Workshop II: PDE and Inverse Problem Methods in Machine Learning

Part of the Long Program High Dimensional Hamilton-Jacobi PDEs
April 20 - 24, 2020

Overview

Researchers in the areas of Partial Differential Equations and Inverse Problems have recently applied ideas from these fields to problems in Machine Learning. The areas of application include the following.

(i) Generalization: Inverse Problems approaches to Learning Theory, regularization of the loss in Deep Learning, convergence in the data sampling limit.

(ii) Optimization: PDE approaches to Stochastic Gradient Descent, Differential Equations interpretations of accelerated first-order optimization methods, convergent algorithms in Deep Learning.

(iii) Semi-supervised learning: PDEs on Graphs.

(iv) Stable architecture design using numerical stability approaches.
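As a small illustration of topic (iii), the sketch below (not taken from the workshop itself; the toy graph and labels are hypothetical) shows Laplacian-regularized semi-supervised learning on a path graph: labels fixed on two nodes are extended to the rest by solving a discrete Laplace equation, i.e. the harmonic extension that minimizes the Dirichlet energy sum of w_ij (u_i - u_j)^2.

```python
import numpy as np

# Path graph on 5 nodes with unit edge weights (hypothetical toy example).
n = 5
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0

D = np.diag(W.sum(axis=1))
L = D - W  # unnormalized graph Laplacian

labeled = {0: 0.0, 4: 1.0}  # node index -> label value
unlabeled = [i for i in range(n) if i not in labeled]
idx_l = list(labeled)
u_l = np.array([labeled[i] for i in idx_l])

# Harmonic extension: solve L_uu u_u = -L_ul u_l for the unlabeled nodes.
L_uu = L[np.ix_(unlabeled, unlabeled)]
L_ul = L[np.ix_(unlabeled, idx_l)]
u_u = np.linalg.solve(L_uu, -L_ul @ u_l)

u = np.zeros(n)
u[idx_l] = u_l
u[unlabeled] = u_u
print(u)  # on a path graph the labels interpolate linearly between endpoints
```

On this toy graph the solution interpolates linearly between the two labeled endpoints; in the large-sample limit, such graph problems converge to continuum PDEs, which is the connection the workshop topic refers to.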

This workshop will bring together researchers with backgrounds in PDEs, Inverse Problems, and Scientific Computing who are already working in machine learning, along with researchers who are interested in these approaches.

This workshop will include a poster session; a request for posters will be sent to registered participants in advance of the workshop.


Organizing Committee

Adam Oberman (McGill University, Mathematics and Statistics)
Lorenzo Rosasco (Università degli Studi di Genova)
Dejan Slepcev (Carnegie Mellon University)
Andrew Stuart (California Institute of Technology)
Yunan Yang (New York University)