A random variable is `concentrated' if it stays close to a fixed constant with high probability. The most basic examples are averages of many i.i.d. random variables, whose concentration is guaranteed by the law of large numbers. However, concentration occurs in many other settings, even when exact calculations of moments or other parameters are impossible. This is because concentration often results from some natural `geometric' structure on the underlying probability space. For instance, large classes of random variables exhibit concentration because they are Lipschitz functions on certain `high-dimensional' spaces that carry both metrics and probability measures, such as Hamming cubes or high-dimensional spheres.
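As a quick illustration of the basic phenomenon (this sketch is not part of the lectures), one can simulate averages of i.i.d. Bernoulli(1/2) variables and observe that the empirical probability of a fixed deviation from the mean shrinks rapidly as the number of summands grows, in line with Hoeffding-type bounds P(|S_n/n - 1/2| >= t) <= 2 exp(-2nt^2):

```python
import random

def sample_mean(n, rng):
    """Mean of n i.i.d. Bernoulli(1/2) variables."""
    return sum(rng.random() < 0.5 for _ in range(n)) / n

def deviation_freq(n, t, trials=2000, seed=0):
    """Empirical frequency of |sample mean - 1/2| >= t over many trials."""
    rng = random.Random(seed)
    return sum(abs(sample_mean(n, rng) - 0.5) >= t
               for _ in range(trials)) / trials

# The deviation frequency drops sharply as n grows, for fixed t = 0.1.
for n in (10, 100, 1000):
    print(n, deviation_freq(n, 0.1))
```

The exponential decay in n predicted by Hoeffding's inequality is visible already at these modest sizes; the function names here are chosen for illustration only.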
Lecture 2 will cover some of the most classical applications of the concentration inequalities from Lecture 1. Depending on time and interest, these may include results from the geometry of Banach spaces, from the asymptotic distribution of random matrices, and from combinatorial optimization, such as estimates on the chromatic number of random graphs. (I will let Lecture 1 spill over to Lecture 2 and omit some of these applications if necessary.) Prerequisites: same as Lecture 1.