Sampling from high-dimensional posterior distributions in Bayesian Inverse Problems (BIPs) suffers from the curse of dimensionality and is in general intractable. For a class of BIPs whose forward problem enjoys spatial regularity (i.e., a coarse-grid discretization serves as a good approximation), we propose to train an invertible multi-scale deep generative network (MsDGN) to approximate the posterior distribution. The invertible MsDGN can be trained efficiently in a coarse-to-fine, multi-stage manner. Once training is finished, sampling from the MsDGN is extremely efficient, and its samples are guaranteed to be i.i.d. We demonstrate the accuracy and efficiency of the proposed invertible MsDGN on three examples: generating realistic natural images, such as human faces; a synthetic high-dimensional BIP whose ground-truth solution is known; and a high-dimensional BIP whose forward problem is an elliptic equation and whose ground-truth solution is unknown.
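The abstract does not specify the network architecture, but the mechanism behind cheap i.i.d. sampling from an invertible generative network can be sketched with a generic RealNVP-style affine coupling flow: Gaussian noise is pushed through a stack of invertible layers, so each sample is an independent draw. The sketch below is a minimal single-scale illustration with fixed random matrices standing in for trained scale/shift networks; the multi-scale coarse-to-fine structure and the training procedure of the actual MsDGN are omitted and all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class AffineCoupling:
    """One invertible affine coupling layer (generic RealNVP-style sketch).
    Splits x into halves; transforms the second half conditioned on the first."""
    def __init__(self, dim, rng):
        self.d = dim // 2
        # Fixed random matrices stand in for the trained scale/shift networks
        # (an assumption for illustration; the real model learns these).
        self.Ws = 0.1 * rng.standard_normal((self.d, dim - self.d))
        self.Wt = 0.1 * rng.standard_normal((self.d, dim - self.d))

    def forward(self, x):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        s, t = x1 @ self.Ws, x1 @ self.Wt
        return np.concatenate([x1, x2 * np.exp(s) + t], axis=1)

    def inverse(self, y):
        # Exact inverse in closed form: this is what makes the flow invertible.
        y1, y2 = y[:, :self.d], y[:, self.d:]
        s, t = y1 @ self.Ws, y1 @ self.Wt
        return np.concatenate([y1, (y2 - t) * np.exp(-s)], axis=1)

def sample(layers, dim, n, rng):
    """Draw n i.i.d. samples by pushing independent Gaussian noise through the flow."""
    z = rng.standard_normal((n, dim))
    for layer in layers:
        z = layer.forward(z)
    return z

dim = 8
layers = [AffineCoupling(dim, rng) for _ in range(3)]
x = sample(layers, dim, 1000, rng)  # 1000 i.i.d. samples of dimension 8

# Invertibility check: inverse(forward(z)) recovers z to machine precision.
z = rng.standard_normal((5, dim))
y = z
for layer in layers:
    y = layer.forward(y)
for layer in reversed(layers):
    y = layer.inverse(y)
print(np.allclose(y, z))
```

Because every sample is an independent push-forward of fresh noise, no Markov chain or burn-in is needed, which is the efficiency advantage the abstract highlights over MCMC-style posterior sampling.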
Back to Workshop II: PDE and Inverse Problem Methods in Machine Learning