I have pioneered the use of advanced Bayesian statistical methods to quantify and manage uncertainty in the outputs of complex mechanistic models. I have received several grants to develop these methods, including "Managing Uncertainty in Complex Models" (MUCM), the largest single grant awarded for research in statistics by the UK research councils. I have steadily extended the techniques to address uncertainty analysis, sensitivity analysis, model calibration (also known as history matching or tuning), analysis of dynamic models, data assimilation, validation, and value of information.
This field has been growing rapidly, with the title "Uncertainty Quantification" (UQ) gradually becoming the accepted term for (at least some of) these activities. I am a member of the Steering Group for the Uncertainty Quantification and Management (UQ&M) in High Value Manufacturing programme run by Innovate UK.
A hands-on training course on the basic theory and methods of uncertainty and sensitivity analysis is available.
To reduce uncertainty in model predictions, it is usual to "train" or "tune" the model to match available observations of the real process that the model represents. This idea of calibration (and variants such as data assimilation) is pervasive in the disciplines that use mechanistic models, yet the methods commonly available for it are crude and laborious. Furthermore, they ignore uncertainty in the fitted parameters, and they over-fit because they also fail to recognise the inadequacies of the model itself. I can offer expertise in calibration using an efficient Bayesian approach that addresses all of these challenges.
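To give a flavour of the idea (this is a toy sketch, not my actual methodology; the simulator, data, and all variance values are invented for illustration), Bayesian calibration keeps a full posterior distribution over the tuned parameter rather than a single best-fit value, and inflates the likelihood variance with a model-discrepancy term so that the fit does not over-trust an imperfect model:

```python
# Toy sketch of Bayesian calibration with a model-discrepancy allowance.
# Everything here (simulator, data, variances) is hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def simulator(x, theta):
    """Toy mechanistic model: predicted response at input x."""
    return theta * x

# Synthetic "field" observations: the real process differs from the
# model (a discrepancy of 0.5*sin(x)) and is measured with noise.
x_obs = np.linspace(0.0, 3.0, 10)
y_obs = 2.0 * x_obs + 0.5 * np.sin(x_obs) + rng.normal(0.0, 0.1, x_obs.size)

sigma_noise = 0.1   # assumed observation-error s.d.
sigma_disc = 0.5    # assumed model-discrepancy s.d. (crude, i.i.d.)

# Grid posterior over theta (flat prior): the likelihood variance is
# inflated by the discrepancy variance, acknowledging model inadequacy.
theta_grid = np.linspace(1.0, 3.0, 2001)
var_total = sigma_noise**2 + sigma_disc**2
log_lik = np.array([
    -0.5 * np.sum((y_obs - simulator(x_obs, t))**2) / var_total
    for t in theta_grid
])
post = np.exp(log_lik - log_lik.max())
post /= post.sum()

# The result is a distribution, not a point estimate: the posterior
# s.d. expresses the remaining uncertainty in the fitted parameter.
theta_mean = np.sum(theta_grid * post)
theta_sd = np.sqrt(np.sum((theta_grid - theta_mean) ** 2 * post))
print(f"posterior mean {theta_mean:.2f}, posterior s.d. {theta_sd:.2f}")
```

Note how the discrepancy term widens the posterior: a naive least-squares fit with variance `sigma_noise**2` alone would claim far more precision for the fitted parameter than the imperfect model warrants.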
It is a truism that all models are wrong, yet there is considerable interest in "validating" models. Comparing model predictions against reality is the obvious way to do this, but it is difficult to determine what degree of prediction error is realistic or acceptable. This is another area where the methods I have created for quantifying uncertainty in model predictions are important, and I can provide validation testing that properly accounts for the various inherent uncertainties in those predictions.
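The principle can be illustrated with a minimal sketch (all numbers below are hypothetical): a prediction error is only "too large" relative to the total predictive uncertainty, which combines uncertainty in the model output itself, model discrepancy, and measurement error:

```python
# Toy validation check: standardize prediction errors against the
# TOTAL predictive uncertainty, not just measurement noise.
# All values here are invented for illustration.
import numpy as np

# Model predictions and new field observations at 5 held-out inputs.
pred_mean = np.array([1.0, 2.1, 3.2, 4.0, 5.1])
obs       = np.array([1.3, 2.0, 3.8, 4.4, 5.0])

sigma_code = 0.15   # uncertainty in the model output (e.g. from an emulator)
sigma_disc = 0.30   # assumed model-discrepancy s.d.
sigma_obs  = 0.10   # measurement-error s.d.

# Total predictive s.d.: the yardstick against which errors are judged.
sigma_tot = np.sqrt(sigma_code**2 + sigma_disc**2 + sigma_obs**2)

z = (obs - pred_mean) / sigma_tot   # standardized prediction errors
print("standardized errors:", np.round(z, 2))
print("all within 2 s.d.?", bool(np.all(np.abs(z) < 2)))
```

Judged against `sigma_obs` alone, several of these errors would look alarming; judged against the full predictive uncertainty, the model is consistent with the observations. That distinction is the crux of honest validation.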
I am happy to consult on these or other problems concerning uncertainty in complex models.