Enabling predictive simulations for design and decision making under a limited budget

Ilias Bilionis
Purdue University

Despite the indisputable successes of modern computational science and engineering, the increase in the predictive abilities of physics-based models has not kept pace with the advances in computer hardware. There is a simple reason for this seemingly counterintuitive fact. When presented with the ability to use more computational power, domain scientists, as expected, take on the challenge of creating even more realistic models. This increase in model complexity has two effects. First, the dollar cost per simulation (system design, deployment, maintenance, and power consumption) becomes significantly higher, even though the wall time of each simulation remains more or less the same. Second, more realism requires the introduction of more parameters to describe boundary and initial conditions, material properties, geometric imperfections, constitutive laws, etc. Since it is impossible, or impractical, to measure the majority of these parameters, their uncertainty needs to be quantified and taken into account. However, proper uncertainty quantification (UQ) typically requires an exponentially increasing number of model evaluations as the number of unknown parameters grows. The high cost of information necessarily restricts our attention to a finite set of points in the design/stochastic space. Information acquisition decisions for UQ tasks involve deciding 1) which simulation model to sample from, 2) where to sample the design/stochastic space, and 3) when to stop sampling.

In this talk, we demonstrate how a fully Bayesian formalism can be used to lay the foundations of a unifying mathematical and computational paradigm for carrying out generic UQ tasks under a limited simulation budget. Specifically, we show how such an approach captures the epistemic uncertainty induced by the limited number of simulations. This uncertainty propagates to all quantities of interest, so that we can ask questions like: “What is our uncertainty about the mean or the variance of the response?”, “How sure are we that this is the true posterior of the calibrated parameters?”, or “How certain are we about the location or the value of the maximum of an objective?” Furthermore, we show how this epistemic uncertainty can be exploited to construct task-specific information acquisition policies. To demonstrate the unifying nature of these ideas, we show results from applications to uncertainty propagation, inverse problems, and optimization under uncertainty.
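To make the idea concrete, the following is a minimal sketch, not the method presented in the talk, of how a Bayesian surrogate's epistemic uncertainty propagates to a quantity of interest and can drive a simple acquisition rule. The toy simulator, the scikit-learn Gaussian process surrogate, and the maximum-predictive-variance policy are illustrative assumptions only.

```python
# Minimal sketch (illustrative assumptions only): a Gaussian process surrogate
# is fit to a handful of expensive simulations; posterior samples of the
# surrogate quantify the epistemic uncertainty about the mean of the response,
# and a simple policy picks where to run the next simulation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):
    """Stand-in for a costly physics-based model with a 1D stochastic input."""
    return np.sin(6.0 * x) + 0.5 * x

rng = np.random.default_rng(0)

# Limited budget: only a few simulator runs.
X_train = rng.uniform(0.0, 1.0, size=(8, 1))
y_train = expensive_simulation(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                              alpha=1e-8, normalize_y=True)
gp.fit(X_train, y_train)

# Dense grid standing in for Monte Carlo samples of the stochastic input.
X_grid = np.linspace(0.0, 1.0, 200).reshape(-1, 1)

# Each posterior draw of the surrogate implies a different value for the mean
# of the response; the spread across draws is the epistemic uncertainty
# induced by having run only 8 simulations.
posterior_samples = gp.sample_y(X_grid, n_samples=500, random_state=1)
mean_of_response = posterior_samples.mean(axis=0)  # one estimate of E[f] per draw
print("mean of response: {:.3f} +/- {:.3f}".format(
    mean_of_response.mean(), mean_of_response.std()))

# A simple (illustrative) information-acquisition policy: run the next
# simulation where the surrogate's predictive standard deviation is largest.
_, std = gp.predict(X_grid, return_std=True)
x_next = X_grid[np.argmax(std)]
print("next simulation input:", x_next)
```

In the setting of the talk, the surrogate, the quantity of interest, and the acquisition policy would be tailored to the task at hand (uncertainty propagation, an inverse problem, or optimization under uncertainty); the sketch above only shows the common pattern of propagating epistemic uncertainty and using it to decide where to sample next.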

