The amazing flexibility of a concept
Digital twins are the hype of the moment in automation. But the concept is rather ambiguous: people refer to it in many different ways. To bring some order to this confusion, I tried to find a classification of digital twins. It turns out that a single classification is not enough.
Everyone probably agrees that a digital twin is some sort of digital representation of an actual device or asset. But the consensus stops there. Some would say that a collection of data from a device is already a digital twin. Others use the concept to refer to the digital design documents of a device. Still others think that simulation is an essential part of a digital twin. Or that an online connection to sensors is essential.
At some point, I became curious to see whether the various opinions could be condensed into a limited number of ‘canonical’ meanings. That would help me to quickly assess the particular category that someone is referring to when talking about digital twins. So, I did a little Googling and some thinking. What I found is that there is not just one way to classify digital twins. In fact, you can classify digital twins in many different ways. Here are four different classifications that I found useful and one that is less so.
1. The level of abstraction
Wikipedia provides a first classification of digital twins; when I checked, it was based mainly on the type of data that is held. The three categories are prototypes, instances and aggregates.
A prototype digital twin is the collection of all the information needed to create an asset. That would include design documents, but also production guidelines, etc. It seems that this is all static data, even though a design may change over time as new versions of the asset are conceived.
The second category, the digital twin instance, is a digital representation of a single device or asset. It collects the data about that single instance, thus building a digital trail of the history of that particular device. This digital trail comprises the various sensor readings from the device. But it also records when inspections or maintenance have been performed, what the specific usage history is, when failures occurred, etc.
Finally, the aggregate type of digital twin collects the data from a large collection of devices. For example, an OEM could collect the data of all the devices that have been installed at clients into a single collection. This would give a representation of the generic object rather than a single instance. The digital twin aggregate would be the place to collect statistics: what is the average time between failures and what is the standard deviation; which part fails most often; what is the typical load applied to the machines?
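To make the instance/aggregate distinction concrete, here is a minimal sketch in Python. All class names and fields are hypothetical illustrations, not taken from any standard: an instance twin holds the digital trail of one device, and an aggregate twin derives fleet statistics from many instances.

```python
from dataclasses import dataclass, field
from statistics import mean, stdev
from collections import Counter

@dataclass
class InstanceTwin:
    """Digital twin instance: the data trail of one physical device (hypothetical fields)."""
    device_id: str
    failure_intervals_h: list[float] = field(default_factory=list)  # hours between failures
    failed_parts: list[str] = field(default_factory=list)           # part name per failure

@dataclass
class AggregateTwin:
    """Aggregate twin: statistics over all installed instances."""
    instances: list[InstanceTwin]

    def mtbf(self) -> float:
        # Mean time between failures, pooled over the whole fleet.
        intervals = [t for i in self.instances for t in i.failure_intervals_h]
        return mean(intervals)

    def mtbf_stdev(self) -> float:
        intervals = [t for i in self.instances for t in i.failure_intervals_h]
        return stdev(intervals)

    def most_failing_part(self) -> str:
        parts = [p for i in self.instances for p in i.failed_parts]
        return Counter(parts).most_common(1)[0][0]

fleet = AggregateTwin(instances=[
    InstanceTwin("dev-001", [900.0, 1100.0], ["bearing", "seal"]),
    InstanceTwin("dev-002", [1000.0], ["bearing"]),
])
print(fleet.mtbf())               # 1000.0
print(fleet.most_failing_part())  # bearing
```

The point of the sketch is that the aggregate twin adds nothing to the raw data itself; its value lies purely in the statistics computed across instances.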
2. From parts to complete systems and beyond
Digital twins can also be classified according to the level of complexity of the asset that they represent. Mouser.com sees a hierarchy consisting of three levels: parts, products and systems. The classification is more or less obvious: digital twins of parts represent a single part of a device and can be constructed and used by the parts manufacturer. The product digital twin combines digital twins of parts into a digital twin of a complete device. On the system level, the digital twin will consist of the digital twins of a large number of interacting devices.
In fact, Mouser.com also discerns a fourth category, called process, to highlight the fact that digital twins need not only apply to physical devices. That observation is very true, but this extra category is, in my opinion, not a logical extra to the first three. The use-case-oriented classification in the next section has a more logical place for this category.
The classification along the level of complexity brings out one of the important features that will make or break the digital twin economy in the time to come: will digital twins from one producer be compatible with those of other producers? Ideally, the parts manufacturer would ship their product with a digital twin (either prototype, instance or aggregate), which the product manufacturer could then use to build a digital twin of the product. This would call for some sort of standardization.
Efforts for that are already under way. The German Association of Engineers came up with a standard for digital factories (VDI 4499), and there is also the Reference Architecture Model Industry 4.0 (RAMI 4.0), which supposedly offers standardized building blocks for digital twins. In the construction sector, Building Information Modelling (BIM) is a natural starting point, even though it's definitely not a standard for digital twins by itself. In the UK, the Centre for Digital Built Britain came up with the aptly named Gemini principles to standardize digital twin development in the built environment. In Barcelona, suppliers to the local government are obliged to provide digital representations of their products so that these can be integrated in the smart city environment. This enforces some sort of standardization.
However, the fact that standards are defined does not mean that the problem is solved. There is still a very real danger that we will end up with many different, incompatible standards. And major suppliers will be tempted to put their own standard forward for others to follow, which will not help convergence. That is one of the reasons why I advocated an open source environment for digital twins in one of my earlier posts.
3. A digital twin for each use case
A third way to classify digital twins is by looking at their use case. For example, Siemens talks about three classes: product, production and performance. The product class focuses on individual products and as such coincides with the product category in the previous classification. The production digital twin is a representation of a production process and is used to design processes that are robust and flexible. The last class, the performance digital twin, gathers operational data from a production process to optimize its performance.
In a somewhat similar fashion, Xmpro discerns digital twins for status monitoring, for operations and for simulation. The status digital twin simply reflects the current status of a device. It’s basically just a dashboard with status information. The operations digital twin is similar to what used to be called a supervisory control and data acquisition (SCADA) system. It is used to control a device or system. The simulation digital twin is used to perform what-if scenarios to explore the effect of potential changes to the system and its settings.
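The three flavors that Xmpro describes can be sketched with one toy model. Everything here is hypothetical (the pump model, its formulas, the threshold); the point is only to show how the same model serves status display, closed-loop operations, and what-if simulation.

```python
def pump_model(rpm: float) -> dict:
    """Toy asset model (illustrative physics only): flow scales with speed,
    temperature with its square."""
    return {"flow_lpm": 0.5 * rpm, "temp_c": 20 + (rpm / 100) ** 2}

current_rpm = 1500.0

# 1. Status twin: just reflect the current state -- essentially a dashboard.
status = pump_model(current_rpm)

# 2. Operations twin: close the loop, as a SCADA system would.
if status["temp_c"] > 250:
    current_rpm *= 0.9  # throttle back when running hot

# 3. Simulation twin: explore a what-if scenario without touching the device.
what_if = pump_model(current_rpm * 1.2)

print(status["flow_lpm"])   # 750.0
print(what_if["flow_lpm"])  # 900.0
```

Note that the three flavors differ only in how the model's output is used, not in the model itself, which is consistent with the observation later in this post that the underlying technology varies less than the classifications suggest.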
This kind of classification is also behind an image that I saw in a presentation by IBM a while ago. Around a big “Digital Twin” label in the center of a circle, there were two bands of terms: “Asset twin”, “Operator twin” and “Engineering twin” in the first band, and “Insurance twin” and “Compliance twin” in the second.
Classifying digital twins according to use case offers a wide range of possible digital twin categories. One might discern digital twins for design, for construction, for financing, for maintenance and for operating an asset. This seems to be the idea behind the Digital Tapestry concept from Lockheed Martin, which combines all the different digital views on a certain asset.
Obviously, these are not separate digital twins: in the core there will be a single, coherent data model. Changes in the design digital twin, for instance, will have implications in the operational digital twin. So, they should be coupled on the data level.
But they will probably have different functionality: the design digital twin may have a model for construction mechanics whereas the financial digital twin will have a financial model of investment, operational costs and benefits. The interface will also be entirely different: the design digital twin will probably have flashy 3D views and the financial digital twin will have the usual boring tables and indicators.
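The idea of one coherent core with use-case-specific functionality on top can be sketched as follows. All names and formulas are made up for illustration: a shared data object, with a design view and a financial view that each add their own model over the same core.

```python
from dataclasses import dataclass

@dataclass
class AssetCore:
    """Single coherent data model shared by all use-case twins (hypothetical fields)."""
    name: str
    span_m: float          # structural span, used by the design view
    build_cost: float      # investment, used by the financial view
    yearly_benefit: float  # operational benefit, used by the financial view

class DesignTwin:
    """Design view: would hold e.g. a construction-mechanics model and 3D views."""
    def __init__(self, core: AssetCore):
        self.core = core

    def max_deflection_mm(self, load_kn: float) -> float:
        # Placeholder for a real mechanics model: deflection grows with load and span.
        return 0.01 * load_kn * self.core.span_m

class FinancialTwin:
    """Financial view: the usual tables and indicators, over the same core data."""
    def __init__(self, core: AssetCore):
        self.core = core

    def payback_years(self) -> float:
        return self.core.build_cost / self.core.yearly_benefit

core = AssetCore("bridge-42", span_m=120.0, build_cost=5e6, yearly_benefit=1e6)
print(DesignTwin(core).max_deflection_mm(50.0))  # 60.0
print(FinancialTwin(core).payback_years())       # 5.0
```

Because both views hold a reference to the same `AssetCore`, a change made through the design twin (say, a longer span that raises the build cost) is immediately visible to the financial twin, which is exactly the coupling on the data level argued for above.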
4. Level of sophistication
The excellent report by ARUP offers yet another classification, based on the sophistication of the digital twin. This sophistication is expressed in terms of autonomy, intelligence, learning and fidelity. The levels range from 1, a non-intelligent, non-autonomous digital twin, to 5, a digital twin that replaces human beings for certain non-trivial tasks.
It is perhaps not an easy classification scheme to use, because a digital twin may score high on one aspect and low on another, raising the question of which overall level is appropriate. But what I find interesting about it is that ARUP obviously expects digital twins to become more intelligent and autonomous.
In fact, one of the reasons why digital twins are so important is that they make physical assets amenable to artificial intelligence. Without a digital twin of some form, there would be nothing for the AI to work on. Are we building digital twins just to feed artificial intelligence, so that it can take over the tasks that we want to automate? I don’t think so. I think digital twins are also useful without AI, but it is certainly an aspect to consider.
5. Applications
A final, but probably less useful way to describe digital twins is by considering the application field. There are digital twins for water management, for the oil sector, for power grids, for buildings and for human bodies, to name just a few.
By saying that this is a less useful classification, I’m not being disrespectful to any of these digital twins. It’s just that the application field does not add much information about the digital twin as such.
Nevertheless, looking at digital twins from the application perspective shows that the adoption of the concept in some fields is way ahead of that in other sectors. That could be the starting point for analyzing the causes of those differences, and may offer clues for stimulating development in fields that are lagging in digital twin adoption.
So how is this helpful?
All these classifications have one thing in common: they tell us next to nothing about the underlying technology. My feeling is that on the technical level, there is probably not too much difference between all these categories. They all have a collection of data, some model of the asset, a user interface and/or API and perhaps some intelligence. That’s basically all there is to a digital twin.
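That common anatomy can be written down in a few lines. This is a deliberately minimal sketch with hypothetical names, not a reference architecture: a data collection, a pluggable model of the asset, a thin API, and an optional bit of intelligence.

```python
from typing import Callable

class DigitalTwin:
    """Minimal anatomy (illustrative only): data + model + API + intelligence."""

    def __init__(self, model: Callable[[dict], dict]):
        self.data: list[dict] = []  # collection of data (sensor history)
        self.model = model          # some model of the asset

    # --- API / user-interface layer ---
    def ingest(self, reading: dict) -> None:
        self.data.append(reading)

    def latest_state(self) -> dict:
        return self.model(self.data[-1])

    # --- optional intelligence ---
    def anomalous(self, threshold: float = 90.0) -> bool:
        return self.latest_state()["temp_c"] > threshold

# A trivial asset model: derive an engineering value from a raw reading.
twin = DigitalTwin(model=lambda r: {"temp_c": r["raw_temp"] / 10})
twin.ingest({"raw_temp": 823})
print(twin.latest_state())  # {'temp_c': 82.3}
print(twin.anomalous())     # False
```

Swapping the `model` callable, or the rule behind `anomalous`, turns this skeleton into any of the categories discussed above without changing its basic shape, which is the point: the classifications differ in function and use, not in fundamental technology.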
The benefit of the classifications that I just described is that they show the kind of functional requirements that digital twins may have to fulfill. This will be all the more important if, at some point, interoperability of digital twins is expected by their users. Then, the architecture of the digital twin will have to accommodate all these use cases.
So apart from being helpful in talking about digital twins, understanding these classifications is, in my opinion, also essential for those of us, like VORtech, who are actually building them.