Generalization Theory in Machine Learning

Adam Oberman
McGill University
Mathematics and Statistics

Statistical learning theory addresses the following question. Given a sample of data points and the corresponding values of an unknown target function $f$, and a parameterized function (hypothesis) class $H$, can we find a function in $H$ which best approximates $f$?
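
For concreteness, one standard way to formalize this question is empirical risk minimization (ERM); the notation below is illustrative and is not taken from the presentation. Given samples $(x_1, y_1), \ldots, (x_n, y_n)$ with $y_i = f(x_i)$ and a loss function $\ell$, ERM selects
$$\hat{h} = \arg\min_{h \in H} \; \frac{1}{n} \sum_{i=1}^{n} \ell\big(h(x_i), y_i\big),$$
and generalization theory asks how close the true risk of $\hat{h}$ comes to the best risk achievable in $H$.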

Statistical learning theory has superficial similarities to classical approximation theory, but overcomes the curse of dimensionality by using concentration of measure inequalities.
Learning bounds are available for traditional machine learning methods, such as support vector machines (SVMs) and kernel methods, but not for deep neural networks.
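
As an illustration of the kind of learning bound meant here (a textbook example, not necessarily the bound developed in the tutorial), for a finite hypothesis class $H$ and a loss bounded in $[0,1]$, Hoeffding's inequality combined with a union bound gives, with probability at least $1-\delta$ over an i.i.d. sample of size $n$,
$$R(h) \le \hat{R}(h) + \sqrt{\frac{\log|H| + \log(1/\delta)}{2n}} \quad \text{for all } h \in H,$$
where $R$ is the true risk and $\hat{R}$ the empirical risk. The gap is controlled by the sample size $n$ rather than by the dimension of the input space, which is how concentration of measure sidesteps the curse of dimensionality.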

In this tutorial, we will review the generalization theory for traditional machine learning methods. We will also point out where deep learning methods differ. Finally, we will discuss some new methods and possible future research directions in this area.
