An introduction to concentration of measure with applications to graph-based learning, Part 1

Jeff Calder
University of Minnesota, Twin Cities

We will give a gentle introduction to concentration of measure inequalities, which provide quantitative error estimates for the difference between a sum of independent and identically distributed random variables and its mean value. We will start with the Markov and Chebyshev inequalities, and then introduce the moment generating function and prove the Chernoff, Hoeffding, and Bernstein inequalities. We will also briefly describe versions of these inequalities for U-statistics and discuss Azuma's inequality for martingales. The second part of the talk will provide an overview of using concentration of measure to prove pointwise consistency of graph Laplacians and discrete-to-continuum convergence in graph-based learning via the maximum principle.
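
As a concrete reference point, here is a minimal LaTeX sketch of two of the inequalities mentioned above; the notation is generic and is not taken from the slides. Markov's inequality states that for a nonnegative random variable $X$ and any $t > 0$,
\[
\mathbb{P}(X \geq t) \leq \frac{\mathbb{E}[X]}{t},
\]
and applying it to $(X - \mu)^2$ yields Chebyshev's inequality, $\mathbb{P}(|X - \mu| \geq t) \leq \mathrm{Var}(X)/t^2$. Hoeffding's inequality sharpens this polynomial decay to exponential decay: if $X_1, \dots, X_n$ are i.i.d. with mean $\mu$ and $a \leq X_i \leq b$ almost surely, then for every $t > 0$,
\[
\mathbb{P}\left( \left| \frac{1}{n} \sum_{i=1}^{n} X_i - \mu \right| \geq t \right) \leq 2 \exp\left( -\frac{2 n t^2}{(b - a)^2} \right).
\]

For the graph-based learning portion, the following is a sketch of one common setup (the normalization here is an assumption, not necessarily the one used in the talk): given i.i.d. samples $x_1, \dots, x_n$ drawn from a density $\rho$ on a domain in $\mathbb{R}^d$, a radial kernel $\eta$, and a bandwidth $\varepsilon > 0$, a graph Laplacian can be written as
\[
\mathcal{L}_{n,\varepsilon} u(x) = \frac{1}{n \varepsilon^{d+2}} \sum_{i=1}^{n} \eta\left( \frac{|x - x_i|}{\varepsilon} \right) \left( u(x_i) - u(x) \right).
\]
Pointwise consistency results of the kind surveyed in the talk show that, for smooth $u$ and with high probability,
\[
\mathcal{L}_{n,\varepsilon} u(x) = \frac{\sigma_\eta}{2 \rho(x)} \operatorname{div}\left( \rho^2 \nabla u \right)(x) + O(\varepsilon) + \text{fluctuations},
\]
where $\sigma_\eta$ is a kernel-dependent constant and the fluctuation term, of size roughly $\sqrt{\log n / (n \varepsilon^{d+2})}$, is exactly what concentration inequalities such as Bernstein's control.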
