You’ll see the word ontology mentioned many times on our site. What does it mean, and why does it matter?
Put simply, ontology is a time-tested way of structuring complex data relationships.
We all know computers need structured data, but some people erroneously believe machine learning and AI make this mandatory first step unnecessary.
Outside a futuristic Hollywood movie, you can’t simply dump all your big data in a ‘data lake’ and ask some AI to sort it out. The better structured the data, the better the computation, the better the result. You experience this every day, from a Google search to an Excel pivot table, a ChatGPT prompt, or a MidJourney illustration.
Garbage in, garbage out.
We used ontology to create a data framework that can deal with non-spatial data, geospatial data, data over time, and data that does not exist yet: in short, data relationships too complex to be stored in tables. This requires a leading-edge graph database, and ontology provides the umbrella philosophy that guides our mapping of the complex connections between multiple datapoints.
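To make that concrete, here is a minimal sketch, with entirely hypothetical asset names and fields, of the shape of data such a framework holds: one asset carrying non-spatial classification, geospatial location, time-series readings, and typed links to other assets, all in a single structure that a flat table cannot express without a separate join table per relationship type.

```python
# A minimal sketch (hypothetical names and values) of one asset node in an
# ontology-guided graph: non-spatial, geospatial, and temporal data together,
# plus typed, directed relationships to other nodes.
from datetime import datetime, timezone

asset = {
    "id": "pump-12",
    "type": "Pump",                                  # non-spatial classification
    "location": {"lat": -36.85, "lon": 174.76},      # geospatial data
    "readings": [                                    # data over time
        (datetime(2023, 5, 1, tzinfo=timezone.utc), {"pressure_kpa": 410}),
        (datetime(2023, 6, 1, tzinfo=timezone.utc), {"pressure_kpa": 455}),
    ],
    "links": [                                       # typed relationships
        ("FEEDS", "tank-03"),
        ("MONITORED_BY", "sensor-88"),
    ],
}

# Pull the most recent reading by timestamp.
latest_time, latest = max(asset["readings"], key=lambda r: r[0])
print(latest["pressure_kpa"])
```

In a graph database the links themselves are first-class data, which is exactly what lets the ontology describe how assets relate rather than just what they are.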
Let’s make the philosophical practical.
A valve may physically be a child, reliant on the performance of the tank it’s connected to, and also a parent, controlling the downstream flow through multiple pipes and other valves. When you decide to shut down that valve, a digital twin can identify the related assets you can perform maintenance tasks on at the same time, avoiding unnecessary rework and unplanned shutdowns. However, our valve may also contain IIoT devices that control the flow from its parent tank. The flow is not necessarily one-way.
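The parent/child idea above can be sketched as a small graph walk. This is a minimal illustration with hypothetical asset IDs, not our actual data model: shutting one valve, we traverse the downstream edges to list every asset whose flow depends on it, so their maintenance can be grouped into one planned shutdown.

```python
# A minimal sketch (hypothetical asset IDs) of finding every asset downstream
# of a valve being shut down, by breadth-first search over FEEDS edges.
from collections import deque

# Directed edges: source FEEDS target. Note the IIoT control edge runs the
# opposite way -- the valve also acts on its parent tank, so influence in
# the graph is not one-way.
feeds = {
    "tank-01":  ["valve-01"],
    "valve-01": ["pipe-01", "pipe-02"],
    "pipe-01":  ["valve-02"],
    "pipe-02":  ["valve-03"],
}
controls = {"valve-01": ["tank-01"]}

def downstream(asset: str) -> list[str]:
    """Walk FEEDS edges from the asset being shut down, breadth-first."""
    seen, queue, order = {asset}, deque([asset]), []
    while queue:
        for nxt in feeds.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                order.append(nxt)
                queue.append(nxt)
    return order

# Every asset returned here can be maintained during the same shutdown.
print(downstream("valve-01"))
```

In a real graph database this traversal is a one-line query rather than hand-written BFS, but the logic is the same: relationships, not table joins, answer the question.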
More than that, a stem adapter in our valve may be the same model as stem adapters in other valves on site, and on other sites. But it’s not the model that sparks our attention; it’s the fact that it has been subject to pressure over a certain level from a certain fluid, and hasn’t been serviced within a given period. How do you instantly find every instance of this circumstance?
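That question boils down to a filter over every instance of the component, wherever it lives. Here is a minimal sketch, with hypothetical thresholds, field names, and records, of the query behind it: every stem adapter exposed to a given fluid above a pressure threshold that hasn’t been serviced since a cutoff date, across all valves and all sites.

```python
# A minimal sketch (hypothetical records and thresholds) of an at-risk query
# across every instance of a component, on every site.
from datetime import date

stem_adapters = [
    {"id": "sa-101", "site": "A", "model": "SA-9", "fluid": "crude",
     "max_pressure_kpa": 620, "last_service": date(2021, 3, 14)},
    {"id": "sa-102", "site": "A", "model": "SA-9", "fluid": "water",
     "max_pressure_kpa": 700, "last_service": date(2020, 1, 2)},
    {"id": "sa-230", "site": "B", "model": "SA-9", "fluid": "crude",
     "max_pressure_kpa": 655, "last_service": date(2023, 2, 1)},
]

def at_risk(adapters, fluid, min_pressure_kpa, serviced_before):
    """Instances exposed to the fluid above the pressure threshold and not
    serviced since the cutoff date."""
    return [a["id"] for a in adapters
            if a["fluid"] == fluid
            and a["max_pressure_kpa"] > min_pressure_kpa
            and a["last_service"] < serviced_before]

print(at_risk(stem_adapters, "crude", 600, date(2022, 1, 1)))
```

The point is that the answer depends on operating history and service records attached to each instance, not on the model number alone.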
Whilst digital twins are fairly linear in 2023, they will increasingly be called on to recreate the complexity of real life and to replicate ever more complex interrelationships. And like self-driving cars, their standard of competence will not be to outperform humans, but to be perfect. That requires structured data.
And this is just a starting point. Extend this complexity over the life of any asset and we quickly realise that any digital twin journey is about layering complex data over time.
Having built digital twins for over 30 years, our team developed a platform that ensures the digital twins you create stand the test of time, based on foundational data relationships that can scale and become infinitely complex.
The test? What will your digital twin become in 50 years’ time?
With Nextspace, you don’t need to know because we’re built to come on that journey with you.