Abstract
Some new bi-fidelity methods for UQ and optimization
Stephen Becker
University of Colorado Boulder
We give an overview of several use-cases and methods of bi-fidelity optimization. We start with the problem of generating samples of a given quantity for the purposes of uncertainty quantification (UQ). A common modern approach is to train a variational autoencoder (VAE) to create stochastic samples from the target distribution; our contribution is to show how low-fidelity models can be paired with high-fidelity models in order to train the VAE using fewer high-fidelity queries (ref: Cheng et al., CMAME '24). We next turn to deterministic derivative-free high-dimensional optimization, motivated by PDE-constrained optimization, where we adapt a stochastic method, known as stochastic subspace descent (SSD), to exploit low-fidelity queries in its line search (ref: Cheng et al., TMLR '25). For derivative-based optimization, motivated by collision resolution problems for simulating rigid bodies inside a Stokesian flow, we use a coarser discretization to create a low-fidelity model and show how to incorporate this into quasi-Newton methods (ref: Rummel et al., arxiv.org/abs/2604.10089). If time allows, we will discuss ongoing work in low-dimensional Bayesian optimization, using an array of multi-fidelity models, some of which have gradients, also motivated by PDE-constrained optimization.