Parallel sampling for computing rare event probabilities, inverse Bayesian inference, and uncertainty quantification with MOOSE

Daniel Schwen
Idaho National Laboratory

Beyond algorithmic improvements to attain larger length and time scales, the increased computational power of exascale machines could be used for parallel sampling of high-fidelity models: computing rare event probabilities (such as component failures), quantifying model uncertainty, building surrogate models, and calibrating model parameters against experimental and computational data through inverse Bayesian inference.
Parallel sampling strategies such as parallelized Metropolis-Hastings and affine invariant ensemble samplers are being implemented in Idaho National Laboratory's finite element modeling (FEM) Multiphysics Object-Oriented Simulation Environment (MOOSE). MOOSE can serve as the driver for sampling either FEM-based high-fidelity (HF) models or arbitrary external "MOOSE-wrapped" simulation codes.
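To illustrate why affine invariant ensemble samplers map well onto parallel hardware, the following minimal Python sketch implements the standard stretch move (Goodman and Weare, 2010): the walker ensemble is split into two halves so that all posterior evaluations within a half are independent and could be dispatched as concurrent forward-model runs. The `log_posterior` function, walker count, and dimensionality here are placeholders, not MOOSE code or its actual implementation.

```python
import numpy as np

def log_posterior(theta):
    # Stand-in log-posterior: an isotropic Gaussian. In practice each
    # evaluation would be a (possibly MOOSE-wrapped) forward model run.
    return -0.5 * np.sum(theta**2)

def ensemble_step(walkers, logp_cur, log_prob_fn, a=2.0, rng=None):
    """One stretch-move update of all walkers.
    The ensemble is split into two halves; each half is updated against
    the other, so the posterior evaluations within a half are mutually
    independent and can be run in parallel."""
    rng = np.random.default_rng() if rng is None else rng
    n, dim = walkers.shape
    half = n // 2
    for s, c in ((slice(0, half), slice(half, n)),
                 (slice(half, n), slice(0, half))):
        active, other = walkers[s], walkers[c]
        m = active.shape[0]
        # Stretch factors z drawn with density g(z) ~ 1/sqrt(z) on [1/a, a]
        z = ((a - 1.0) * rng.random(m) + 1.0) ** 2 / a
        partners = other[rng.integers(0, other.shape[0], size=m)]
        proposals = partners + z[:, None] * (active - partners)
        # These m evaluations are embarrassingly parallel.
        logp_new = np.array([log_prob_fn(p) for p in proposals])
        log_accept = (dim - 1) * np.log(z) + logp_new - logp_cur[s]
        accept = np.log(rng.random(m)) < log_accept
        walkers[s][accept] = proposals[accept]
        logp_cur[s][accept] = logp_new[accept]
    return walkers, logp_cur

# Usage: 64 walkers exploring a 5-dimensional posterior
rng = np.random.default_rng(0)
walkers = rng.normal(size=(64, 5))
logp = np.array([log_posterior(w) for w in walkers])
for _ in range(1000):
    walkers, logp = ensemble_step(walkers, logp, log_posterior, rng=rng)
```

Because each batch of proposals can be evaluated simultaneously, wall-clock time per sampler iteration is set by a single model run rather than by the size of the ensemble, which is what makes such schemes attractive at exascale.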
Exascale computing will allow us to sample over higher-fidelity models that better capture material failure (e.g., two- or three-dimensional models instead of one-dimensional ones), generate material model parameterizations with built-in parameter uncertainties, train surrogate models on the fly with active learning, and quantify uncertainty for complex simulation results.

