Supervised Learning In Banach Space (Approximation Theory For Operators)

Andrew Stuart
California Institute of Technology

Consider separable Banach spaces X and Y, and equip X with a probability measure μ. Let F: X → Y be an unknown operator. Given data pairs {x_j, F(x_j)} with {x_j} drawn i.i.d. from μ, the goal of supervised learning is to approximate F. The proposed approach is motivated by the recent successes of neural networks and deep learning in addressing this problem in settings where X is a finite-dimensional Euclidean space and Y is either a finite-dimensional Euclidean space (regression) or a set of finite cardinality (classification). Algorithms which address the problem for infinite-dimensional spaces X and Y have the potential to speed up large-scale computational tasks arising in science and engineering in which F must be evaluated many times. The talk introduces an overarching approach to this problem and describes one methodology built from it. Basic theoretical results are explained, and numerical results are presented for solution operators arising from an elliptic PDE. If time permits, other methodologies will be briefly described, and results concerning approximation of the semigroup generated by the Burgers equation will be presented.
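
The abstract does not specify the methodology, so the following is a minimal hypothetical sketch of the general setup, not the speaker's method: X and Y are discretized on a grid, data pairs are generated by solving a 1D elliptic problem -(a u')' = f with μ a simple random-Fourier measure on the coefficient a, and a random-features ridge regression serves as a stand-in surrogate for F. All names, the measure, and the parameter choices below are illustrative assumptions.

    # Hypothetical illustration, not the talk's methodology: learn a surrogate for
    # the operator F mapping a coefficient a(x) to the solution u(x) of
    # -(a u')' = f on (0,1), u(0) = u(1) = 0, from i.i.d. samples a ~ mu.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 64                                   # grid points (discretization of X and Y)
    x = np.linspace(0, 1, n)
    h = x[1] - x[0]
    f = np.ones(n)                           # fixed forcing term

    def solve_elliptic(a):
        """Finite-difference solve of -(a u')' = f with zero Dirichlet BCs."""
        a_half = 0.5 * (a[:-1] + a[1:])      # coefficient at cell midpoints
        main = (a_half[:-1] + a_half[1:]) / h**2
        off = -a_half[1:-1] / h**2
        A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
        u = np.zeros(n)
        u[1:-1] = np.linalg.solve(A, f[1:-1])
        return u

    def sample_coefficient():
        """Draw a from a simple random-Fourier measure mu (positive by construction)."""
        k = np.arange(1, 6)
        xi = rng.standard_normal(5) / k      # decaying random coefficients
        return np.exp(np.sin(np.pi * np.outer(x, k)) @ xi)

    # Data pairs {x_j, F(x_j)} with x_j ~ mu i.i.d.
    N = 500
    A_train = np.stack([sample_coefficient() for _ in range(N)])
    U_train = np.stack([solve_elliptic(a) for a in A_train])

    # Random-features ridge regression as a stand-in surrogate for F
    n_feat = 256                             # number of random features (assumption)
    W = rng.standard_normal((n, n_feat)) / np.sqrt(n)
    b = rng.uniform(0, 2 * np.pi, n_feat)
    phi = lambda A_: np.cos(A_ @ W + b)      # feature map on discretized inputs
    Phi = phi(A_train)
    C = Phi.T @ Phi + 1e-6 * np.eye(n_feat)  # ridge-regularized normal equations
    coef = np.linalg.solve(C, Phi.T @ U_train)

    # Evaluate the surrogate on a fresh draw from mu
    a_test = sample_coefficient()
    u_pred = phi(a_test[None, :]) @ coef
    u_true = solve_elliptic(a_test)
    print("relative L2 error:", np.linalg.norm(u_pred - u_true) / np.linalg.norm(u_true))

The point of such a surrogate is amortization: after training, evaluating phi(a) @ coef is far cheaper than calling solve_elliptic, which is the speed-up the abstract refers to when F must be evaluated many times.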

