At this year's NVIDIA GTC conference, Nextspace.com celebrated its arrival in the NVIDIA ecosystem by offering Omniverse technology partners and customers the ability to create federated industrial digital twins.
With four years’ experience creating industrial metaverse projects across the US, Europe, Asia, the Middle East, and Australasia, Nextspace.com has developed a data interoperability platform that makes it simpler, faster, and cheaper for digital engineers to create digital twins.
Digital twins created on Nextspace’s platform can be federated, joining different virtual worlds. Whilst the idea of virtual worlds might at first sound overly familiar to NVIDIA’s community, what’s different is what lies beneath Nextspace’s polygons.
A single, open graph database of real-world data unifies divergent data types: CAD files, BIM, GIS, IoT, IFC, CSV, and point clouds from LiDAR and photogrammetry, alongside spreadsheets, documents, video, and PDFs. This real-world business data is connected through Pixar’s Universal Scene Description (USD) schema and accessed through the visual interface of NVIDIA Omniverse.
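As a rough illustration of the idea, a unified graph of this kind can be sketched in a few lines of Python: each node carries a stable identity, typed business attributes drawn from any source format, and links to other nodes. This is a hypothetical sketch of the concept, not Nextspace’s actual schema; the attribute names and the `usd_path` field are illustrative assumptions.

```python
import uuid

class Node:
    """A graph node representing one real-world entity."""
    def __init__(self, kind, **attrs):
        self.id = str(uuid.uuid4())   # stable identity across all data sources
        self.kind = kind              # e.g. "tractor", "engine", "building"
        self.attrs = attrs            # business attributes (CSV rows, IoT readings, ...)
        self.edges = []               # typed links to other nodes

    def link(self, relation, other):
        """Record a typed relationship to another entity."""
        self.edges.append((relation, other.id))

# Unify two records that arrive from different source formats:
# a USD scene path for the tractor, a supplier spreadsheet for its engine.
tractor = Node("tractor", model="TX-90", usd_path="/World/Fleet/Tractor_01")
engine = Node("engine", serial="E-4417", source="supplier_manifest.csv")
tractor.link("has_component", engine)
```

The point of the sketch is that the visual representation (the USD path) is just one more attribute hanging off the entity, alongside everything the business knows about it.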
NVIDIA’s Omniverse offers Nextspace some game-changing benefits. Once connected to Omniverse, Nextspace users can leverage the platform’s multi-GPU, multi-node scalable core rendering and simulation capabilities to enhance their workflow. In addition, the platform’s open standards and its Connectors (plugins to third-party applications) avoid the need for multiple APIs to enable live syncs between Omniverse collaborators (for example, architects updating an Autodesk Maya file and the corresponding Nextspace digital twin).
Mark Thomas, CEO of Nextspace, explains the opportunity:
The challenge we set ourselves was to build a bridge between the Omniverse USD representation of the world, and the business and engineering data representation of the world.
USD is a perfect framework and container for visual information about things, materials, their geometry, fluid dynamics, and the order of loading visual information. What we work on is the unseen layers behind the visual. The interconnected data that’s important to an industrial or business user.
Where USD can reuse designs as shortcuts to communicate ‘this is a tractor’ in any given scene, we are more interested in identifying each tractor as unique in the real world. Not just that tractor’s design, model, and paint finishes, which tyres it has and when they were last changed, who it was purchased from, which engine component parts came from which third-party suppliers, what has been repaired or replaced, its GPS location and mileage, running costs, production load and outputs compared to forecasts and so on.
The Nextspace team faced some challenges in reconciling the business and Omniverse worlds. Three interesting ones were: reintroducing GUID-based unique object identities to the world of USD, bringing real-world geospatial tiles into Omniverse, and streaming large layers of visual and non-visual data in and out at different resolutions.
Josh Wood, Nextspace’s Principal Architect, outlines how USD was perfect for aspects of the challenge that faced the team.
The basic structure of the USD schema is not foreign to how we model interconnected entities.
It has a lot of features that are useful for data like prototypes of engineering assemblies or building components where you need to convey that this individual machine is actually built from a set of individually manufactured components. Or this processing plant was designed one way but built slightly differently. And then there’s a service history of changes recorded. And then we bring in IoT or GPS location time series data and actions, where individual components are replaced with other components but the visual nature of the asset has not changed.
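The scenario Wood describes, where a physical part is swapped but the visual asset stays the same, can be captured with a small data model that separates design identity from physical identity. The following is a minimal hypothetical sketch (the class and field names are illustrative, not Nextspace’s implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    guid: str              # unique identity of this physical part
    part_number: str       # shared design identity (many parts per design)
    history: list = field(default_factory=list)  # service history of changes

    def replace_with(self, new_guid):
        """Swap the physical part; the visual asset is unchanged."""
        self.history.append(("replaced", self.guid, new_guid))
        self.guid = new_guid

# An impeller is replaced during servicing: the rendered geometry for
# part IMP-2200 stays identical, but the real-world identity changes.
pump_impeller = Component(guid="c-001", part_number="IMP-2200")
pump_impeller.replace_with("c-002")
```

The `part_number` plays the role of USD’s reusable design, while the `guid` and `history` carry the unseen, per-instance record that industrial users need.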
Pixar made the decision to move away from GUIDs within USD because using paths allows a more dynamic workflow that can be simpler and more productive. But in the industrial world, it matters that a small part was sourced from a specific manufacturing batch, had or had not been serviced, or had been replaced.
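One way to reconcile the two approaches is to keep USD’s path-based addressing while maintaining a registry that maps stable GUIDs to prim paths, so an object’s identity survives re-parenting or renaming in the scene. The sketch below is a hypothetical illustration under that assumption (written in plain Python to avoid a dependency on the `pxr` USD libraries; it is not Nextspace’s actual mechanism):

```python
import uuid

class GuidRegistry:
    """Map stable real-world GUIDs to (possibly changing) USD prim paths."""
    def __init__(self):
        self._guid_to_path = {}
        self._path_to_guid = {}

    def register(self, prim_path, guid=None):
        """Assign a GUID to a prim path, minting one if none is supplied."""
        guid = guid or str(uuid.uuid4())
        self._guid_to_path[guid] = prim_path
        self._path_to_guid[prim_path] = guid
        return guid

    def move(self, old_path, new_path):
        """A prim was re-parented or renamed; its GUID survives the move."""
        guid = self._path_to_guid.pop(old_path)
        self._guid_to_path[guid] = new_path
        self._path_to_guid[new_path] = guid

    def path_for(self, guid):
        return self._guid_to_path[guid]

registry = GuidRegistry()
g = registry.register("/World/Plant/Pump_03")
# The pump prim is moved in a scene restructure; business data still resolves.
registry.move("/World/Plant/Pump_03", "/World/Plant/Line2/Pump_03")
```

In a full USD pipeline the same effect could be achieved by storing the GUID as custom metadata on the prim; the registry above simply makes the lookup explicit.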
One of the most interesting things about the project has been the potential for Nextspace to refocus on its core mission. Alex Lavrinovich, Nextspace’s CTO, reflects on this insight.
USD is so much more than a file format. To think of it like that is to think of the internet as just being HTML files. USD delivers scene information from the server to the client but it’s what else is going on – the dynamics and partial loading – that delivers powerful outcomes.
What we realised on the way through was that this project was allowing us to relax the internal pressures we faced with various visualisation technologies including Cesium, AR and VR. NVIDIA Omniverse’s visualisation and ecosystem of technology partners help us reconnect with our real mission – being a data unifier, a data exchange, to integrate different data sources.
What we bring to the Omniverse table is a platform to help partners connect data that’s relevant to their projects. Unifying divergent data types into a single model, allowing developers and designers to collaborate. Even though they're not necessarily using the same software, our platform can connect their engineering data under the hood.
Josh Wood elaborates.
Of course, you need to have USD data to fit into the whole ecosystem. So you've got to get that from somewhere. But most business data isn't USD already. As an example, CAD systems are starting to produce USD output as Apple's ARKit has become more popular, but this is not universal or complete.
Whilst Unreal’s Datasmith can import a lot of different engineering data formats, connection to Omniverse and USD export is ongoing. Typically such import and export have been focused on the graphics and geometry of the data; we're interested in the attributes and metadata as well.
So data unification still needs to happen, but because it’s at Omniverse scale, the tension between scale and granularity emerges. Models in our Industrial Metaverse can have datasets distributed all over the world with potentially millions of entities identified using unique IDs, which may or may not have visual representation data and their own unseen data streams.
Clearly, dealing with 60,000 individual rendering objects is not necessarily an efficient way to create a virtual world, so we’ve been working on the workflows required to optimise this, especially for geospatial tilesets with underlying objects. Because the name ‘Omniverse’ says it all. It’s meant to host everything. Virtual and real. And future possible simulations. What we’re bringing is the data structure to enable that from the point of view of industry, of real-world things.
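The optimisation Wood describes can be illustrated by bucketing many individually identified objects into spatial tiles, so the renderer handles a handful of tile batches rather than tens of thousands of separate prims, while every object’s GUID remains addressable for its unseen business data. This is a hypothetical sketch under those assumptions; the tile size and record layout are illustrative choices:

```python
from collections import defaultdict

TILE_SIZE = 100.0  # metres per tile edge; an illustrative choice

def tile_key(x, y):
    """Quantise a world position to the tile that contains it."""
    return (int(x // TILE_SIZE), int(y // TILE_SIZE))

def bucket_objects(objects):
    """Group (guid, x, y) records into tiles for batched rendering.

    Each tile becomes one renderable unit, but every object's GUID
    is retained so its non-visual data streams stay addressable.
    """
    tiles = defaultdict(list)
    for guid, x, y in objects:
        tiles[tile_key(x, y)].append(guid)
    return tiles

objects = [("g1", 5.0, 5.0), ("g2", 50.0, 80.0), ("g3", 150.0, 20.0)]
tiles = bucket_objects(objects)  # two tiles instead of three draw objects
```

In a production geospatial pipeline the tiles would additionally carry merged geometry at multiple levels of detail, but the identity-preserving bucketing is the core idea.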
There’s a pause as what’s been achieved and what work lies ahead sinks in. Mark Thomas breaks the silence and sums it all up.
Over the past four years, we’ve been talking about how our federated digital twins will become the foundational building blocks of the Industrial Metaverse. This vision remains, but perhaps recast as the engineering and business data server, the unseen plumbing, of an industrial Omniverse.