Workshop II: Theory and Practice of Deep Learning

Part of the Long Program Mathematics of Intelligences
October 14 - 18, 2024

Overview

Modern neural networks operate at unprecedented scale. Their success in fields ranging from natural language processing (e.g., ChatGPT) to structural biology (e.g., AlphaFold) and computer vision (e.g., self-driving cars) is undeniable. Elucidating the emergent properties of learning in these vast artificial systems, the central theme of this workshop, lies at the heart of both ML theory and practice.

Neural networks have confounded traditional ML beliefs about the dangers of overfitting and the nature of optimization in high dimensions. They have also given rise to many new empirical findings around feature/transfer learning, adversarial examples, compressibility, scaling laws (relating the size of datasets, models, and compute), the importance of adaptive optimization methods, and so on. Explaining and predicting these phenomena requires a rich new theory of learning, one capable of addressing the delicate interplay between models, data, and optimizers at large scale.

This workshop will bring together top researchers driving the frontiers of this work with experts in the theory and experimental study of natural intelligence. The result will be a scholarly discussion of how to frame questions about learning and how to distill the similarities and differences between learning in biological and artificial systems.


Organizing Committee

Misha Belkin (University of California, San Diego (UCSD))
Boris Hanin (Princeton University)
Julia Kempe (New York University)
Pat Shafto (Rutgers University)