Many advanced techniques now exist to compute the numerical solution of an Optimal Transport (OT) problem with rigorous guarantees. These methods emerge from the interaction between OT and other important fields of applied mathematics, such as statistics and numerical analysis. In particular, OT estimators with good guarantees have been developed both for discrete measures [1,2,3] and for continuous probability distributions [4]. The former have already led to important applications in machine learning and deep learning. For the latter, although statistical approaches achieving low error even in high dimension are known [5], their computational complexity is still of order ε^(-d/2) to achieve error ε on a d-dimensional problem. More recently, benefiting from the interaction with approximation theory and machine learning, a new family of methods has been devised that achieves a computational complexity of ε^(-4) for very smooth probability distributions [6]. These methods cast the dual of the OT problem in terms of a novel family of Sums-of-Squares optimization problems, in which the functions involved are not necessarily polynomials.
A function f is a sum of squares (SOS) when it can be written as f(x) = φ(x)^* A φ(x) for some feature map φ : X → C^n and some positive semidefinite (PSD) matrix A. This representation guarantees that f is non-negative while remaining linear in the parameter A. Plugging this model into a convex optimization problem over non-negative functions therefore yields a semidefinite program (SDP). In particular, in the algorithm above, OT is represented in terms of a PSD matrix and can be formulated as an SDP.
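As a toy illustration (independent of the algorithm of [6]), the following Python sketch builds an SOS function from a hypothetical monomial feature map φ(x) = (1, x, x²) and a PSD matrix A = BᵀB, and checks the two properties mentioned above: non-negativity of f and linearity of f in the parameter A. The specific choices of φ and B are assumptions made purely for illustration.

```python
import numpy as np

def phi(x):
    """Hypothetical monomial feature map phi : R -> R^3."""
    return np.array([1.0, x, x**2])

# Any matrix of the form A = B^T B is positive semidefinite,
# so f(x) = phi(x)^T A phi(x) = ||B phi(x)||^2 is a sum of squares.
B = np.array([[1.0, -2.0, 1.0],
              [0.0,  1.0, 1.0]])
A = B.T @ B

def f(x, M=A):
    """SOS model: quadratic form of phi(x) with a PSD matrix."""
    return phi(x) @ M @ phi(x)

# Non-negativity: f(x) >= 0 for every x (checked here on a grid).
xs = np.linspace(-5.0, 5.0, 201)
assert min(f(x) for x in xs) >= -1e-9

# Linearity in the parameter: f_{A1 + A2}(x) = f_{A1}(x) + f_{A2}(x).
A2 = np.eye(3)
x0 = 0.7
assert np.isclose(f(x0, A + A2), f(x0, A) + f(x0, A2))
```

Here f(x) = (1 - 2x + x²)² + (x + x²)², so non-negativity is explicit; in an actual SDP formulation, A would instead be a decision variable constrained to be PSD, with the objective and constraints linear in A.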
The goal of the workshop is to foster discussion at the intersection between OT and the numerical domains recalled above, and to explore the following open questions:
[1] J. Altschuler, J. Weed, and P. Rigollet, Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration, in Advances in Neural Information Processing Systems, 2017, pp. 1964–1974.
[2] M. Cuturi, Sinkhorn distances: Lightspeed computation of optimal transport, in Advances in Neural Information Processing Systems, 2013, pp. 2292–2300.
[3] M. Cuturi and G. Peyré, A smoothed dual approach for variational Wasserstein problems, arXiv preprint arXiv:1503.02533, (2015).
[4] J. Weed and F. Bach, Sharp asymptotic and finite-sample rates of convergence of empirical measures in Wasserstein distance, Bernoulli, 25 (2019), pp. 2620–2648.
[5] J. Niles-Weed and Q. Berthet, Minimax estimation of smooth densities in Wasserstein distance, The Annals of Statistics, 50 (2022), pp. 1519–1540.
[6] A. Vacher, B. Muzellec, A. Rudi, F. Bach, and F.-X. Vialard, A dimension-free computational upper-bound for smooth optimal transport estimation, in Conference on Learning Theory, 2021, pp. 4143–4173.
This workshop will include a poster session. A request for posters will be sent to registered participants before the workshop.
Quentin Berthet
(Google)
Jonathan Niles-Weed
(New York University)
Alessandro Rudi
(École Normale Supérieure)