In this talk, we approach the problem of learning continuous normalizing flows from a dual perspective motivated by entropy-regularized optimal transport, in which continuous normalizing flows are cast as gradients of scalar potential functions. This formulation allows us to optimize a dual objective involving only the scalar potential functions, removing the burden of explicitly computing normalizing flows during training.
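To give a flavor of the potential-based viewpoint, here is a minimal numerical sketch (not the authors' method or parameterization): in the optimal-transport picture, the map pushing one density onto another is the gradient of a convex scalar potential, so the transport never has to be represented directly. The quadratic potential below is an illustrative choice whose gradient map can be checked in closed form.

```python
import numpy as np

# Illustrative choice (an assumption, not the talk's model): take the
# scalar potential phi(x) = 0.5 * x^T A x for a symmetric positive-definite
# matrix A. Its gradient T(x) = grad phi(x) = A x is then the transport map,
# and it pushes samples from N(0, I) forward to N(0, A^2).

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])            # symmetric positive definite

def potential(x):
    """Scalar potential phi(x) = 0.5 * x^T A x for a single point x."""
    return 0.5 * x @ A @ x

def transport(xs):
    """Gradient map T(x) = A x, applied row-wise to an array of samples."""
    return xs @ A.T

xs = rng.standard_normal((100_000, 2))  # samples from N(0, I)
ys = transport(xs)                      # push-forward through grad(phi)

emp_cov = np.cov(ys.T)                  # should be close to A @ A = A^2
print(np.round(emp_cov, 2))
```

The empirical covariance of the transported samples matches A² up to sampling error, confirming that the gradient of the scalar potential alone determines the flow of the samples.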
I will present results from joint work with Christopher Finlay and Aram-Alexandre Pooladian.