We present recent results on solving problems from the theory of Optimal Transport using an efficient parallel implementation of the Network Simplex algorithm. In the Machine Learning community, the mainstream Optimal Transport algorithms are based on regularized versions of the Optimal Transport problem, which are solved using variants of Sinkhorn’s algorithm. When the input probability measures are discrete, these problems can be formulated as Linear Programs (LP), but standard primal, dual, and barrier LP algorithms are too slow for practical ML applications. In this talk, we present an efficient parallel implementation of the Network Simplex, which runs on both multi-core and GPU-accelerated hardware and permits solving very large instances to optimality. We demonstrate our parallel Network Simplex algorithm on two applications: the computation of the “Gene Mover Distance” between pairs of gene expressions, and the computation of Wasserstein Barycenters for a given set of images.
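To make the contrast concrete, the following is a minimal sketch (not the talk’s parallel solver) of the two formulations the abstract mentions: discrete Optimal Transport solved exactly as a Linear Program, and the entropically regularized version solved with Sinkhorn iterations. The measures, cost matrix, and regularization strength are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative discrete OT instance (assumed data, not from the talk).
a = np.array([0.5, 0.5])            # source measure
b = np.array([0.25, 0.75])          # target measure
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])          # ground cost matrix
m, n = C.shape

# --- Exact OT as a Linear Program: min <C, X> s.t. row sums = a, col sums = b, X >= 0.
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0      # sum_j X[i, j] = a[i]
for j in range(n):
    A_eq[m + j, j::n] = 1.0               # sum_i X[i, j] = b[j]
res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
              bounds=(0, None), method="highs")
exact_cost = res.fun                       # optimal transport cost

# --- Entropic regularization solved by Sinkhorn iterations.
eps = 0.01                                 # regularization strength (assumed)
K = np.exp(-C / eps)
u = np.ones(m)
for _ in range(1000):
    v = b / (K.T @ u)                      # scale columns to match b
    u = a / (K @ v)                        # scale rows to match a
plan = u[:, None] * K * v[None, :]         # approximate transport plan
sinkhorn_cost = float((plan * C).sum())
```

For small regularization the Sinkhorn cost approaches the exact LP value; the point of the talk is that the LP itself, solved with a parallel Network Simplex, becomes tractable at scales where generic LP algorithms are too slow.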