Deep learning differs from other machine learning methods in a key way: it can be blended seamlessly into larger programs that are optimized end-to-end, an emerging paradigm known as “differentiable programming.” This paradigm is particularly advantageous in the physical sciences, because it allows machine learning to be combined with the well-understood models and numerical methods of scientific computing.
In this talk, I will present two examples [1, 2] of how differentiable programming can enable interpretable learning in the context of simulating partial differential equations. First, I’ll show how deep learning can be used to improve discretizations inside numerical methods for solving partial differential equations, allowing for accurate simulations at much coarser resolution. Second, I’ll show how deep learning can be used to reparameterize optimization landscapes in PDE-constrained structural optimization problems, significantly increasing the quality of the resulting designs.
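To make the differentiable-programming idea concrete, here is a minimal, hypothetical sketch (not the method of [1] or [2]): a forward-Euler solver for the 1-D heat equation is written in JAX, and because the whole simulation is differentiable, an unknown physical coefficient can be recovered by gradient descent end-to-end through the solver. All names and parameter values below are illustrative choices, not taken from the talk.

```python
import jax
import jax.numpy as jnp

def step(u, c, dx=1.0, dt=0.1):
    # One forward-Euler step of the heat equation u_t = c * u_xx,
    # using a periodic second-difference stencil.
    u_xx = (jnp.roll(u, -1) - 2 * u + jnp.roll(u, 1)) / dx**2
    return u + dt * c * u_xx

def simulate(c, u0, n_steps=10):
    # Roll the solver forward; every operation is traceable by JAX,
    # so gradients flow through the full simulation.
    u = u0
    for _ in range(n_steps):
        u = step(u, c)
    return u

# Synthetic "observations" generated with a true diffusivity of 0.5.
x = jnp.arange(32)
u0 = jnp.sin(4 * 2 * jnp.pi * x / 32)
target = simulate(0.5, u0)

def loss(c):
    # Mismatch between the simulation and the observed final state.
    return jnp.mean((simulate(c, u0) - target) ** 2)

# Differentiate straight through the numerical method.
grad_loss = jax.grad(loss)

c = 0.1  # initial guess for the diffusivity
for _ in range(100):
    c = c - 3.0 * grad_loss(c)  # plain gradient descent
```

In a differentiable program, the learnable piece need not be a single scalar: the same pattern lets a neural network sit inside the solver (e.g. producing stencil coefficients, as in [1]) or parameterize the design variables of an optimization problem (as in [2]), with gradients supplied automatically.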
[1] Bar-Sinai*, Y., Hoyer*, S., Hickey, J. & Brenner, M. P. Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 201814058 (2019). doi:10.1073/pnas.1814058116
[2] Hoyer, S., Sohl-Dickstein, J. & Greydanus, S. Neural reparameterization improves structural optimization. arXiv [cs.LG] (2019). https://arxiv.org/abs/1909.04240
Back to Workshop II: Interpretable Learning in Physical Sciences