An empirical look at generalization in neural nets

Tom Goldstein
University of Maryland

Generalization in neural nets is a mysterious phenomenon that has been studied by many researchers, but usually from a purely theoretical angle. In this talk, we instead use empirical tools and visualizations to investigate why generalization is so hard to explain, how "good" minima of neural loss functions differ qualitatively from "bad" ones, and why optimizers are biased towards "good" minima.
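
For concreteness, here is a minimal sketch (in PyTorch; not the speaker's code) of the kind of visualization the abstract alludes to: evaluate the loss along a random direction in weight space, with the direction rescaled so each of its parameter tensors matches the norm of the corresponding model tensor, a simplified form of the filter normalization from Li et al., "Visualizing the Loss Landscape of Neural Nets" (NeurIPS 2018), of which the speaker is a coauthor. The helper names random_direction and loss_slice are hypothetical.

    import torch

    def random_direction(model):
        """Sample a random direction, rescaled so each tensor has the
        same norm as the corresponding model parameter (a simplified
        per-tensor form of filter normalization)."""
        direction = []
        for p in model.parameters():
            d = torch.randn_like(p)
            d = d * (p.norm() / (d.norm() + 1e-10))
            direction.append(d)
        return direction

    def loss_slice(model, loss_fn, data, target, alphas):
        """Evaluate the loss at theta + alpha * d for each alpha,
        then restore the original weights."""
        base = [p.detach().clone() for p in model.parameters()]
        direction = random_direction(model)
        losses = []
        with torch.no_grad():
            for alpha in alphas:
                for p, p0, d in zip(model.parameters(), base, direction):
                    p.copy_(p0 + alpha * d)
                losses.append(loss_fn(model(data), target).item())
            for p, p0 in zip(model.parameters(), base):
                p.copy_(p0)
        return losses

Sweeping alphas over, say, torch.linspace(-1, 1, 41).tolist() around a trained minimizer and plotting the resulting losses gives a 1D slice of the landscape; in such plots, "good" minima tend to appear as wide, flat basins and "bad" ones as narrow, sharp wells.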

