Sparsity-enforced regularizations for optimal learning of high-dimensional systems from random data

Clayton Webster
Oak Ridge National Laboratory

This talk will focus on compressed sensing approaches to sparse polynomial approximation of complex functions in high dimensions. Of particular interest is the parameterized PDE setting, where the target function is smooth and characterized by a rapidly decaying orthonormal expansion whose most important terms are captured by a lower (or downward closed) set. By exploiting this fact, we will present and analyze several procedures for exactly reconstructing a set of (jointly) sparse vectors from incomplete measurements. These include novel weighted $\ell_1$ minimization, improved iterative hard thresholding, mixed convex relaxations, and nonconvex penalties. Theoretical recovery guarantees will also be presented, based on improved bounds for the restricted isometry property as well as unified null space properties that encompass all currently proposed nonconvex minimizations. Numerical examples are provided to support the theoretical results and demonstrate the computational efficiency of the described compressed sensing methods.
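For concreteness, a standard form of the weighted $\ell_1$ minimization underlying the first of these procedures is the quadratically constrained program (the particular weights and constraint level used in the talk may differ from this generic statement):

$$\min_{z \in \mathbb{C}^N} \; \sum_{j=1}^{N} w_j |z_j| \quad \text{subject to} \quad \| A z - y \|_2 \le \eta,$$

where $A$ is the matrix of orthonormal polynomial evaluations at the random sample points, $y$ is the vector of (possibly noisy) function samples, and $w_j \ge 1$ are weights chosen to favor coefficients supported on a lower set. Likewise, the basic iterative hard thresholding recursion on which the improved variants build is

$$z^{k+1} = H_s\big( z^k + A^* (y - A z^k) \big),$$

where the operator $H_s$ retains the $s$ largest-magnitude entries of its argument and sets the rest to zero.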

