Tony O'Hagan - Consulting


Consultancy on uncertainty in mechanistic models


My expertise

I have pioneered the use of advanced Bayesian statistical methods to quantify and manage uncertainty in the outputs of complex mechanistic models. I have received several grants to develop these methods, including "Managing Uncertainty in Complex Models" (MUCM), the largest single research grant awarded for research in statistics by the UK research councils. I have steadily extended the techniques to address uncertainty analysis, sensitivity analysis, model calibration (also known as history matching or tuning), analysis of dynamic models, data assimilation, validation and value of information.

This field has been growing rapidly, with the title "Uncertainty Quantification" (UQ) gradually becoming the accepted term for (at least some of) these activities. I am a member of the Steering Group for the Uncertainty Quantification and Management (UQ&M) in High Value Manufacturing programme run by Innovate UK.

A hands-on training course on the basic theory and methods of uncertainty and sensitivity analysis is available.

My clients

My work in this field has so far been primarily academic. However, I have consulted on applications with the National Radiological Protection Board and with WRc (a water industry research organisation), and there is considerable scope for more commercial applications.

Typical challenges

Models are used in almost every field of industry, commerce, technology and policy-making. They describe complex real-world processes through mathematical equations, and are typically implemented in computer programs that can be expensive and time-consuming to run. High-profile examples are models of climate (and of climate change), models of large engineering structures such as buildings, engines and aircraft, and risk assessment models for nuclear installations. Decisions with very large financial (and other) implications are often made using such models, and model users are increasingly asking how much they can trust the model predictions. Given the inevitable uncertainty about the parameters in a model, and about the assumptions made in building it, how uncertain are its outputs? I have developed innovative methods for uncertainty and sensitivity analyses that are orders of magnitude more efficient than other available approaches. The sketch below illustrates the basic idea.
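To make the idea concrete, here is a minimal sketch in Python (numpy only). Everything in it is invented for illustration: the "simulator" is a toy one-line function, and the input distribution, kernel and hyperparameters are made-up values, not anything from a real analysis. The workflow it shows is: run the expensive model at a small number of carefully chosen inputs, fit a fast Gaussian-process emulator to those runs, then propagate input uncertainty through the cheap emulator instead of the model itself.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "expensive" simulator: in reality each run might take hours.
    def simulator(x):
        return np.sin(3.0 * x) + 0.5 * x ** 2

    # Step 1: run the simulator at a small, space-filling design of inputs.
    X_train = np.linspace(0.0, 2.0, 8)
    y_train = simulator(X_train)

    # Step 2: fit a simple Gaussian-process emulator to those 8 runs
    # (squared-exponential kernel; hyperparameters fixed here for brevity).
    def kernel(a, b, length=0.5, var=1.0):
        return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

    K = kernel(X_train, X_train) + 1e-8 * np.eye(len(X_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))

    def emulate(x_new):
        # Posterior mean of the emulator: a fast surrogate for simulator().
        return kernel(x_new, X_train) @ alpha

    # Step 3: propagate input uncertainty through the cheap emulator.
    x_samples = rng.normal(1.0, 0.3, size=100_000)  # uncertain input, N(1, 0.3^2)
    y_samples = emulate(x_samples)

    print(f"output mean = {y_samples.mean():.3f}, sd = {y_samples.std():.3f}")
    # 100,000 emulator evaluations cost almost nothing; 100,000 runs of a
    # real simulator would be infeasible -- hence the efficiency gain.

A full analysis would also estimate the kernel hyperparameters from the training runs and carry the emulator's own uncertainty (its posterior variance) through to the final answer; the sketch keeps only the posterior mean for brevity.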

To reduce uncertainty in the model predictions, it is usual to "train" or "tune" the model to match available observations of the real process that the model represents. This idea of calibration (and variants such as data assimilation) is pervasive in the disciplines that use mechanistic models, yet the methods commonly available for it are crude and laborious. Furthermore, they do not recognise uncertainty in the fitted parameters, and they over-fit because they also fail to recognise the inadequacies of the model itself. I can offer expertise in calibration using the efficient Bayesian approach that addresses all of these challenges; a simplified sketch follows.
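The sketch below, again with invented numbers, shows the Bayesian view of calibration for a single parameter theta. The synthetic "field data" are generated from a process that deliberately differs from the model, and a crude stand-in for model discrepancy (an inflated error variance, rather than a full Gaussian-process discrepancy term) keeps the example short. The output is a posterior distribution for theta, so parameter uncertainty is carried forward rather than discarded in favour of a single best fit.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy mechanistic model with one unknown calibration parameter theta.
    def model(x, theta):
        return theta * x

    # Synthetic field observations: a model-like trend plus a systematic
    # discrepancy plus observation noise.
    x_obs = np.linspace(0.0, 1.0, 10)
    y_obs = 1.5 * x_obs + 0.2 * np.sin(4.0 * x_obs) + rng.normal(0.0, 0.05, 10)

    sigma_e = 0.05  # observation error sd (assumed known here)
    sigma_d = 0.10  # allowance for model discrepancy, as extra variance

    def log_post(theta):
        # Flat prior on theta; inflating the error variance by sigma_d^2 is a
        # crude stand-in for a full Gaussian-process discrepancy term.
        resid = y_obs - model(x_obs, theta)
        return -0.5 * np.sum(resid ** 2) / (sigma_e ** 2 + sigma_d ** 2)

    # Random-walk Metropolis sampler over theta.
    theta = 1.0
    lp = log_post(theta)
    samples = []
    for _ in range(20_000):
        prop = theta + rng.normal(0.0, 0.1)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)

    post = np.array(samples[5_000:])  # discard burn-in
    print(f"posterior for theta: mean = {post.mean():.3f}, sd = {post.std():.3f}")

Because theta emerges as a distribution rather than a point estimate, predictions made with the calibrated model automatically inherit the remaining calibration uncertainty.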

It is a truism that all models are wrong, yet there is considerable interest in "validating" models. Comparing model predictions against reality is the obvious way to do this, but it is difficult to determine what degree of prediction error is realistic or acceptable. This is another area where the methods I have created for quantifying uncertainty in model predictions are important: I can provide validation testing that properly accounts for the various inherent uncertainties in the model predictions, along the lines sketched below.
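As a final illustration (all numbers invented), one simple diagnostic of this kind standardises each prediction error by the total predictive standard deviation. If the uncertainty statement is honest, the standardised errors should behave like draws from a standard normal distribution; systematic excursions beyond about two standard deviations signal a validation failure rather than acceptable error.

    import numpy as np

    # Invented numbers: predictive means and *total* predictive standard
    # deviations (input uncertainty + emulator uncertainty + discrepancy +
    # observation error), alongside held-back field observations.
    y_pred = np.array([2.1, 3.4, 1.8, 4.0, 2.9])
    sd_pred = np.array([0.3, 0.5, 0.2, 0.6, 0.4])
    y_obs = np.array([2.4, 3.1, 2.3, 4.9, 2.7])

    # Standardised prediction errors.
    z = (y_obs - y_pred) / sd_pred
    print("standardised errors:", np.round(z, 2))

    # Honest uncertainty => z should look like draws from N(0, 1);
    # values well beyond +/-2 flag a validation failure.
    print("fraction within 2 sd:", np.mean(np.abs(z) < 2.0))

A fuller check would also account for correlation between the prediction errors, but the principle is the same: judge the errors against the stated uncertainty, not against an arbitrary tolerance.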

I am happy to consult on these or other problems concerning uncertainty in complex models.



Updated: 18 January 2017
Maintained by: Tony O'Hagan