VVUQ your model and twin – should we trust them?

The world is moving towards digital twins.
I recently came across an insightful article:
A probabilistic graphical model foundation for enabling predictive digital twins at scale, available on arXiv and published in Nature Computational Science.

The digital twin is a set of coupled computational models that evolve over time to persistently represent the structure, behavior, and context of a unique physical asset. The lab develops the methods and algorithms that enable the creation of a predictive digital twin. The physical asset is a custom-built 12 ft wingspan unmanned aerial vehicle (UAV). The structural digital twin of the UAV is used to monitor vehicle structural health and drive dynamic flight-planning decisions. The predictive digital twin combines scientific machine learning with predictive physics-based models, and component-based reduced-order modeling makes the approach computationally efficient and scalable.
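The paper frames this as a probabilistic graphical model (a dynamic Bayesian network) in which sensor data repeatedly update the estimated digital state. Purely as an illustration of that flavour of state tracking (the damage states, transition probabilities and sensor likelihoods below are invented for this sketch, not taken from the paper), a minimal discrete Bayesian filter might look like this:

```python
import numpy as np

# Hypothetical illustration: the digital twin tracks a discrete structural
# damage state of the UAV wing and updates its belief as sensor data arrive.
# All states and probabilities below are invented for this sketch.

states = ["healthy", "minor_damage", "severe_damage"]

# P(next state | current state): damage persists or grows, it never heals.
transition = np.array([
    [0.95, 0.04, 0.01],
    [0.00, 0.90, 0.10],
    [0.00, 0.00, 1.00],
])

# P(binned strain reading | state), readings binned as low / medium / high.
likelihood = np.array([
    [0.80, 0.15, 0.05],   # healthy
    [0.20, 0.60, 0.20],   # minor damage
    [0.05, 0.25, 0.70],   # severe damage
])

belief = np.array([1.0, 0.0, 0.0])   # start from a healthy prior
observations = [0, 1, 1, 2]          # indices of binned sensor readings over time

for obs in observations:
    belief = transition.T @ belief   # predict: propagate the state forward
    belief *= likelihood[:, obs]     # update: weight by the sensor evidence
    belief /= belief.sum()           # normalise back to a probability vector

for s, p in zip(states, belief):
    print(f"P({s}) = {p:.3f}")
```

In the actual work the updated digital state also drives the choice of reduced-order model and the flight-planning decision; the sketch above only shows the update step.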

Scaling digital twins from the artisanal to the industrial is research published by the same lab; it states:
“In addition to physics models, we emphasize the synergy between mechanistic and statistical models in enabling the vision of digital twin. […] These cases place new demands on the speed, robustness, validation, verification and uncertainty quantification [VVUQ] in digital twin creation workflows.”

Verification, validation and uncertainty quantification for predictive digital twins argues: To truly realize value, a digital twin must do more than just reflect the evolving state of its physical twin; it must be predictive, that is, it must be able to issue predictions about yet-unseen conditions and future states that the system may find itself in. The digital twin must be able to support—with confidence—assessments of what-if scenarios that inform critical decisions:
– What if the damaged unmanned aerial vehicle is permitted to complete its package delivery mission?
– How will a specific patient respond to a specific therapy?
Predictions will be used to inform decisions across a spectrum of applications and corresponding risks. When it comes to high-stakes decisions, these predictions must be based on verified code and validated models, and must be equipped with an assessment of their uncertainty.
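As a hedged sketch of what such a what-if assessment could look like in code, the following toy Monte Carlo estimate asks whether the damaged UAV should complete its delivery mission; the capacity and load distributions and the risk threshold are purely illustrative assumptions, not values from any of the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative distributions only: remaining wing capacity of the damaged
# vehicle and the load it would see if it completed the delivery mission.
capacity = rng.normal(loc=9.0, scale=1.5, size=n)        # kN, degraded by damage
mission_load = rng.normal(loc=6.0, scale=1.0, size=n)    # kN, gusts included

p_failure = np.mean(mission_load > capacity)
print(f"Estimated P(failure if mission continues) = {p_failure:.4f}")

# Decision rule with an assumed risk tolerance of 1e-2, chosen for this sketch.
decision = "continue mission" if p_failure < 1e-2 else "abort and land"
print(decision)
```

The point is not the particular numbers but that the prediction comes with a quantified uncertainty that a decision rule can act on.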

Achieving digital twins at scale will require a drastic reduction in the technical barriers to their adoption. As with past powerful numerical techniques, there will need to be considerable investment in the mathematical, numerical, computational science and computer science methods that underpin digital twins, to enable robust, rapid and accessible deployment.
The use of digital twins in operational decisions will also require new workflows; for example, when and how should information from a digital twin enter clinical decision workflows? Who has access to information from an aircraft digital twin: is it the owner, the pilot or the manufacturer?
Finally, it is worth noting that the operational computing costs for maintaining and updating digital twins at scale will need to be considered in the context of the benefits; the energy required to train advanced AI, for instance, is already a recognized cost in deploying this technology.
The increased use of digital twins across sectors will lead to a maturation in what to expect from a digital twin and an understanding of how and where digital twins can add value and inform decisions. Such an improved understanding of costs and rewards, together with decreased uncertainty and risk in developing and deploying digital twins, will be key to accelerating and scaling their adoption.


Uncertainty Quantification for Digital Twins
Establishing reliability and trust in digital models and digital twins (DTs) is crucial for their adoption in practice. Uncertainty quantification (UQ) is an enabling technology for assessing these traits. With access to information about where a DT is confident or uncertain, designers, operators, and other stakeholders become aware of the range of possible responses and outcomes. Consequently, informed decisions can be made on control, design, policy, or further experimentation. UQ therefore promotes transparency of the DT and is a crucial component of decision-support systems.
Further, combined with code verification and model validation, the VVUQ (verification, validation, and uncertainty quantification) system has grown to become the standard in many fields of computational science and engineering.

The article Digital Twin Concepts with Uncertainty for Nuclear Power Applications presents a technical overview of a number of key UQ tasks that arise in the interaction cycle between the physical and digital twins (a minimal sketch of the first two tasks follows the list):
– forward UQ to propagate uncertainty from digital representations to predict behavior of the physical asset,
– inverse UQ to incorporate new measurements obtained from the physical asset back into the DT, and
– optimization under uncertainty to facilitate decisions about experiments that maximize information gain, or actions that maximize the physical asset's performance in an uncertain environment.
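Here is a minimal sketch of the first two tasks, using a toy one-parameter stiffness model and importance weighting for the Bayesian update; the model, names and numbers are assumptions made for illustration, not taken from the cited article. The third task, optimization under uncertainty, would then select experiments or actions by scoring them against the updated predictions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model standing in for the digital twin's physics: predicted tip
# deflection (mm) as a function of an uncertain stiffness parameter k.
def predict_deflection(k, load=5.0):
    return 1000.0 * load / k

# --- Forward UQ: propagate parameter uncertainty to the prediction ----------
k_prior = rng.normal(loc=200.0, scale=20.0, size=50_000)   # prior samples of k
deflection = predict_deflection(k_prior)
print(f"prior prediction: {deflection.mean():.1f} ± {deflection.std():.1f} mm")

# --- Inverse UQ: assimilate a new measurement from the physical asset -------
measured = 27.0      # mm, hypothetical sensor reading
sigma_meas = 1.0     # mm, assumed measurement noise

# Importance weights proportional to the Gaussian measurement likelihood.
w = np.exp(-0.5 * ((predict_deflection(k_prior) - measured) / sigma_meas) ** 2)
w /= w.sum()

k_post_mean = np.sum(w * k_prior)
k_post_std = np.sqrt(np.sum(w * (k_prior - k_post_mean) ** 2))
print(f"posterior stiffness: {k_post_mean:.1f} ± {k_post_std:.1f}")

# Forward UQ again, now with the posterior (resampled by weight), so the
# twin's predictions reflect the newly assimilated measurement.
k_post = rng.choice(k_prior, size=50_000, p=w, replace=True)
deflection_post = predict_deflection(k_post)
print(f"updated prediction: {deflection_post.mean():.1f} ± {deflection_post.std():.1f} mm")
```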

Challenges pertaining to UQ for DTs reside primarily in the areas of computational speed, integration among the different UQ tasks, and the role of UQ within the broader goal of establishing trust in DTs.


When we can trust computers (and when we can’t) suggests the following solutions:
For modelling, we need to tackle both epistemic sources of error (arising from incomplete knowledge of the system and of the model itself) and aleatoric sources of error (arising from inherent randomness and noise in the data).
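To make that distinction concrete (with invented numbers only): aleatoric error is the irreducible noise in each observation, epistemic error is our ignorance about the model itself, and a law-of-total-variance decomposition over an ensemble of candidate models separates the two.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented example: M candidate models (epistemic spread) each predict a
# quantity that is also observed with measurement noise (aleatoric spread).
M, N = 20, 1_000
model_bias = rng.normal(0.0, 0.8, size=M)          # epistemic: model-to-model
noise = rng.normal(0.0, 0.3, size=(M, N))          # aleatoric: per-observation

predictions = 10.0 + model_bias[:, None] + noise   # shape (M, N)

aleatoric = predictions.var(axis=1).mean()   # mean within-model variance
epistemic = predictions.mean(axis=1).var()   # variance of the model means
total = predictions.var()

print(f"aleatoric ≈ {aleatoric:.3f}, epistemic ≈ {epistemic:.3f}, "
      f"total ≈ {total:.3f} (sum ≈ {aleatoric + epistemic:.3f})")
```

Only the epistemic part can be reduced by better models and more data; the aleatoric part sets a floor on the achievable precision.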
To deal with these challenges, a number of countermeasures have been put forward:
– documenting detailed methodological and statistical plans of an experiment ahead of data collection (preregistration);
– demanding that studies are thoroughly replicated before they are published;
– insisting on collaborations to double-check findings;
– explicit consideration of alternative hypotheses, even processing all reasonable scenarios;
– the sharing of methods, data, computer code and results in central repositories, such as the Open Science Framework, a free, open platform to support research, enable collaboration and ‘team science’; and
– blind data analysis, where data are shifted by an amount known only to the computer, leaving researchers with no idea what their findings imply until everyone agrees on the analyses and the blindfold is lifted (see the sketch after this list).
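A minimal sketch of the blinding idea, with an invented dataset and offset (this is not code from the article):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical blinding scheme: raw measurements are shifted by an offset
# known only to the computer; analysts work on the blinded values and the
# offset is removed only once the analysis procedure is frozen.
raw_data = rng.normal(loc=5.2, scale=1.0, size=500)   # invented measurements

secret_offset = rng.uniform(-3.0, 3.0)                # hidden from the analysts
blinded = raw_data + secret_offset

# Analysts develop and agree on their procedure using the blinded data...
blinded_estimate = blinded.mean()

# ...and only then is the blindfold lifted.
unblinded_estimate = blinded_estimate - secret_offset
print(f"blinded mean: {blinded_estimate:.2f}, unblinded mean: {unblinded_estimate:.2f}")
```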

When it comes to the limitations of digital computing, research is still required to determine how reliably we can compute the properties of […] chaotic dynamical systems on digital computers. Among possible solutions, one that seems guaranteed to succeed is analogue computing, an older idea that can handle the numerical continuum of reality in a way digital computers can only approximate.
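A standard toy illustration of the underlying difficulty (not taken from the article): iterating the chaotic logistic map from the same written initial condition in single and double precision gives trajectories that soon disagree entirely, because finite-precision arithmetic can only approximate the continuum.

```python
import numpy as np

# Logistic map x_{n+1} = r * x_n * (1 - x_n) in the chaotic regime (r = 4),
# iterated in float32 and float64 from the same starting point.
r = 4.0
x32 = np.float32(0.2)
x64 = np.float64(0.2)

for n in range(1, 61):
    x32 = np.float32(r) * x32 * (np.float32(1.0) - x32)
    x64 = r * x64 * (1.0 - x64)
    if n % 10 == 0:
        print(f"n={n:2d}  float32={float(x32):.6f}  float64={x64:.6f}")
```

Both runs start from the value 0.2, but 0.2 is not exactly representable and is rounded differently at each precision; after a few dozen iterations the two trajectories bear no resemblance to each other, and which of them, if either, tracks the true orbit is exactly the question raised here.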

In the short term, notably in the biosciences, better data collection, curation, validation, verification and uncertainty quantification procedures […] will make computer simulations more reproducible, while ML will benefit from a more rigorous and transparent approach.
The field of big data and ML has become extremely influential, but without big theory it remains dogged by the lack of a firm theoretical underpinning that would ensure its results are reliable.
Indeed, we have argued that in the modern era in which we aspire to describe really complex systems, involving many variables and vast numbers of parameters, there is not sufficient data to apply these methods reliably. Our models are likely to remain uncertain in many respects, as it is so difficult to validate them.

When we can trust computers (and when we can’t) concludes:
In the medium term, AI methods may, if carefully produced, improve the design, objectivity and analysis of experiments. However, this will always require the participation of people to devise the underlying hypotheses and, as a result, it is important to ensure that they fully grasp the assumptions on which these algorithms are based and are also open about these assumptions.

It is already becoming increasingly clear that ‘artificial intelligence’ is a digital approximation to reality. Moreover, in the long term, when we are firmly in the era of routine exascale and perhaps eventually also quantum computation, we will have to grapple with a more fundamental issue.
Even though there are those who believe the complexity of the universe can be understood in terms of simple programs rather than by means of concise mathematical equations, digital computers are limited in the extent to which they can capture the richness of the real world.
Freeman Dyson, for example, speculated that for this reason the downloading of a human consciousness into a digital computer would involve ‘a certain loss of our finer feelings and qualities’.
In the quantum and exascale computing eras, we will need renewed emphasis on the analogue world and analogue computational methods if we are to trust our computers.
