On algorithms in high dimensions

Gregory Beylkin
University of Colorado Boulder
Dept. of Applied Math

The talk will review two representations of multivariate
functions and associated algorithms that avoid the "curse of
dimensionality".

First, we will consider separated representations of functions
and operators and briefly discuss their applications to
multivariate regression and machine learning. In the statistics
literature, representations of this form appear under the names
"parallel factorization" or "canonical decomposition"; they have
been used primarily to analyze data on a grid (typically in
dimension d = 3) and do not construct the underlying function.
We view separated representations as a nonlinear method to track
a function in a high-dimensional space using a small number of
parameters. Note that the traditional approach of characterizing
a wide class of low-complexity functions via smoothness and
decay of derivatives does not appear to work in high dimensions.
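To make the idea concrete, here is a minimal sketch of a separated representation: a function of d variables written as a short sum of products of univariate functions, so that storage grows linearly in d rather than exponentially. The function, names, and rank-2 example below are illustrative assumptions, not material from the talk.

```python
import numpy as np

# A separated representation approximates a function of d variables by a
# short sum of products of univariate factors:
#     f(x_1, ..., x_d) ~ sum_{l=1}^r  s_l * prod_{i=1}^d  f_i^l(x_i)
# It stores d*r univariate factors (linear in d), sidestepping the
# exponential cost of a full d-dimensional grid.

def eval_separated(s, factors, x):
    """Evaluate a separated representation at a point x in R^d.

    s       : (r,) array of scalars s_l
    factors : list of d lists; factors[i][l] is a univariate callable f_i^l
    x       : (d,) point
    """
    d, r = len(factors), len(s)
    val = 0.0
    for l in range(r):
        term = s[l]
        for i in range(d):
            term *= factors[i][l](x[i])
        val += term
    return val

# Illustrative example: f(x) = prod_i cos(x_i) + prod_i sin(x_i) in d = 10
# is an exact rank-2 separated representation.
d, r = 10, 2
s = np.array([1.0, 1.0])
factors = [[np.cos, np.sin] for _ in range(d)]

x = np.full(d, 0.3)
exact = np.cos(0.3) ** d + np.sin(0.3) ** d
print(abs(eval_separated(s, factors, x) - exact) < 1e-12)  # True
```

In this sketch the number of parameters is r + d*r factor descriptions, which is why the separation rank r, rather than smoothness, becomes the relevant complexity measure.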

Second, we will consider multivariate mixtures (a more general
class of functions than separated representations), the
corresponding reduction algorithm, and several applications in
numerics as well as in data science.
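As a rough illustration of what "reduction" means for a mixture, the sketch below evaluates a multivariate Gaussian mixture and shortens a redundant one by merging terms with (nearly) identical parameters. This naive merging is an assumption made for illustration only; the reduction algorithm discussed in the talk is more general.

```python
import numpy as np

# A multivariate mixture is a linear combination of simple terms, e.g.
# Gaussians:  f(x) = sum_m c_m * exp(-||x - mu_m||^2 / (2 * sigma_m^2)).
# Reduction seeks a shorter mixture matching f to a prescribed accuracy.
# Here we only merge terms whose parameters nearly coincide (an
# illustrative simplification, not the talk's algorithm).

def gaussian_mixture(x, c, mu, sigma):
    """Evaluate the mixture at points x of shape (n, d)."""
    sq = ((x[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)  # (n, M)
    return (c * np.exp(-sq / (2 * sigma ** 2))).sum(axis=1)

def reduce_mixture(c, mu, sigma, tol=1e-10):
    """Combine coefficients of terms with (nearly) identical parameters."""
    keep_c, keep_mu, keep_sigma = [], [], []
    for cm, m, s in zip(c, mu, sigma):
        for j in range(len(keep_c)):
            if (np.linalg.norm(m - keep_mu[j]) < tol
                    and abs(s - keep_sigma[j]) < tol):
                keep_c[j] += cm  # duplicate term: add coefficients
                break
        else:
            keep_c.append(cm); keep_mu.append(m.copy()); keep_sigma.append(s)
    return np.array(keep_c), np.array(keep_mu), np.array(keep_sigma)

# A redundant 4-term mixture in d = 3 that is really a 2-term mixture.
d = 3
mu = np.array([[0.0] * d, [1.0] * d, [0.0] * d, [1.0] * d])
sigma = np.array([0.5, 0.7, 0.5, 0.7])
c = np.array([1.0, 2.0, 0.5, -1.0])

c2, mu2, s2 = reduce_mixture(c, mu, sigma)
x = np.random.default_rng(0).standard_normal((5, d))
print(len(c2))  # 2 terms remain
print(np.allclose(gaussian_mixture(x, c, mu, sigma),
                  gaussian_mixture(x, c2, mu2, s2)))  # True
```

The point of the example is the interface, not the merging rule: a reduction algorithm takes a long mixture and returns a shorter one that agrees with it to a specified accuracy.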

Workshop I: Analyzing High-dimensional Traces of Intelligent Behavior