Learning physical dynamics and a dynamical sampling algorithm

Nisha Chandramoorthy
Georgia Institute of Technology

We can successfully train neural networks or other data-based models to reproduce a "map" or a short-time integration of an ODE/PDE model. Successful training generally means that the learned model produces orbits close to those of the true model in expectation over initial conditions; that is, the learned model generalizes well. However, generalization does not imply that the model learns the ground-truth statistics or long-term behavior, or that it has predictive skill for any other quantity of interest that depends on its long-range behavior. A model that generalizes well may not always be a physical representation of the ground truth. In the first half of the talk, we give sufficient conditions under which a model that generalizes well is also physical, provided Jacobian information is added to the training loss. We explain why Jacobian information can lead to statistical accuracy using the concept of shadowing in dynamical systems.
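A minimal sketch of what "adding Jacobian information to the training loss" can look like (a toy illustration, not the construction analyzed in the talk): the loss penalizes both the mismatch of the learned one-step map and the mismatch of its Jacobian with respect to the state. Here the "true" map is a hypothetical Euler step of dx/dt = -x^3, and the surrogate is a tiny two-parameter model standing in for a neural network.

```python
import numpy as np

h = 0.1  # step size of a toy Euler integrator (hypothetical example)

def true_map(x):
    """One Euler step of the toy ODE dx/dt = -x**3."""
    return x - h * x**3

def true_jac(x):
    """Jacobian (here a scalar derivative) of the true one-step map."""
    return 1.0 - 3.0 * h * x**2

def model(params, x):
    """Tiny surrogate standing in for a neural network: w*x + b*x**3."""
    w, b = params
    return w * x + b * x**3

def model_jac(params, x):
    w, b = params
    return w + 3.0 * b * x**2

def jacobian_matched_loss(params, xs, lam=1.0):
    """Mean-squared error on states plus, weighted by lam, on Jacobians."""
    state_loss = np.mean((model(params, xs) - true_map(xs)) ** 2)
    jac_loss = np.mean((model_jac(params, xs) - true_jac(xs)) ** 2)
    return state_loss + lam * jac_loss
```

With params = (1.0, -h) the surrogate reproduces both the map and its Jacobian exactly, so the loss vanishes; a surrogate that only fits states on average can still incur a large Jacobian penalty.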


In the second half, we propose a sampling algorithm based on an infinite-dimensional score-matching method. In this method, which we call score operator Newton or SCONE, we recursively compute a transport map between a tractable source probability density and the target density. The recursion is derived as a Newton-Raphson method for the zero of the score residual operator (on the space of transport maps). We discuss convergence guarantees and theoretical as well as algorithmic implications of SCONE. The first half is joint work with Jeongjin Park and Nicole Yang, while the second half is joint work with Youssef Marzouk.
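To convey the flavor of a Newton-Raphson recursion for a transport map, here is a one-dimensional toy (an assumption-laden stand-in, not the SCONE operator itself, which performs Newton steps on the score residual over the space of maps): for a monotone 1D map, transporting a source density to a target is equivalent to solving F_target(T(x)) = F_source(x) pointwise, and each value y = T(x) can be found by Newton's method, whose derivative is just the target density.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def pdf(y, mu, sigma):
    """Density of N(mu, sigma^2)."""
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def transport_newton(x, mu, sigma, tol=1e-12, max_iter=50):
    """Pointwise Newton solve of F_target(y) = F_source(x) for y = T(x),
    with source N(0, 1) and target N(mu, sigma^2)."""
    u = phi(x)      # source CDF value at x
    y = mu + x      # warm start near the target mean keeps undamped Newton stable
    for _ in range(max_iter):
        r = phi((y - mu) / sigma) - u   # residual to drive to zero
        y -= r / pdf(y, mu, sigma)      # Newton step; derivative of CDF is the pdf
        if abs(r) < tol:
            break
    return y
```

For this Gaussian pair the exact map is T(x) = mu + sigma*x, so e.g. transport_newton(1.0, 2.0, 0.5) converges to 2.5; the point of the toy is only that a transport map can be characterized as the zero of a residual and recovered by Newton iteration.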