White Paper: Machine Learning for Physics and the Physics of Learning
This white paper is an outcome of IPAM’s fall 2019 long program, Machine Learning for Physics and the Physics of Learning.
During the last two decades, advances in artificial intelligence and machine learning (ML) have revolutionized many application areas, such as image recognition and language translation. The key to this success has been the design of algorithms that can extract complex patterns and highly non-trivial relationships from large amounts of data and generalize this information to the evaluation of new data. In the last few years these tools and ideas have also been applied to, and in some cases revolutionized, problems in the fundamental sciences, where the discovery of patterns and hidden relationships can lead to the formulation of new general principles.
This IPAM program focused on the opportunities and challenges in applying ML tools to the physical sciences, and on whether and how theoretical results from the physical sciences can inform the design of new ML methods. The program hosted four workshops (WS), each focusing on a different aspect of the overarching topic:
- WS1 “From Passive to Active: Generative and Reinforcement Learning with Physics” focused on novel machine learning models for designing new molecules or materials, synthesis pathways, and optimal controls for dynamical systems.
- WS2 “Interpretable Learning in Physical Sciences” focused on the need to develop interpretable ML methods to understand the ML predictions in terms of physically meaningful quantities, in order to advance our understanding.
- WS3 “Validation and Guarantees in Learning Physical Models: from Patterns to Governing Equations to Laws of Nature” focused on learning equations, i.e. interpretable and extrapolative models, from data; on modeling dynamical systems and low-dimensional manifolds; and on error bounds and statistical aspects of model selection.
- WS4 “Using Physical Insights for Machine Learning” focused on applying insights or models from physics to develop new ML models and algorithms, or to better understand why successful ML methods, such as stochastic gradient descent in deep learning frameworks, work well.
In addition to the workshops, we formed multiple working groups that met regularly during the program, each tackling a different subtopic. The following sections describe the state of the discussion and the outcomes for these subtopics. In particular, we discuss the open challenges that were identified and that we as a group plan to continue investigating in the future.