Emerging Algorithms and Mathematical Paradigms for Reliable Next-Generation AI
Overview
Sampling is a fundamental algorithmic task that underpins core applications in Bayesian inference, scientific simulation, and uncertainty quantification. Today, sampling methods are a driving force behind modern AI, powering diffusion models for image generation and large language models (LLMs) for text synthesis. As AI becomes deeply integrated into society, developing rigorous performance guarantees for accuracy, safety, and reliability is essential. Because these systems are inherently probabilistic, understanding them requires an interdisciplinary approach that connects foundational mathematics with computational practice.
The sheer scale of modern AI—requiring thousands of GPUs and massive datasets—demands that theoretical frameworks evolve alongside hardware. While mathematics provides the bedrock, closer contact between academics and industrial practitioners is necessary to accelerate the practical application of mathematical ideas and address urgent challenges in large-scale computation.
This workshop convenes leaders from academia and industry to catalyze dialogue across (i) mathematics: probability, statistics, and geometry; (ii) computation: machine learning, AI, and data science; and (iii) industry: research labs focused on real-world deployment. Together, we will explore future directions in sampling to ensure AI remains robust, interpretable, and mathematically sound.
This workshop will include a poster session; a request for posters will be sent to registered participants in advance of the workshop.
Organizing Committee
Ben Leimkuhler (University of Edinburgh)
Soledad Villar (Johns Hopkins University)
Andre Wibisono (Yale University)
