Workshop I: From Passive to Active: Generative and Reinforcement Learning with Physics

September 23 - 27, 2019

Overview

How can we design a costly experiment in an informationally optimal way? How can we generate complex structures under strong physical constraints? How should we structure the stages of learning or observing in a changing, reactive, or adversarial environment? This workshop will address these questions through active learning, sequential decision making, experimental design, reinforcement learning, interactive learning, and generative learning. In other words, the workshop will examine how to plan experiments so that information is gathered in a cost-optimal way. It will also cover the application of these methods to training complex models, such as deep architectures, and the transfer of these ideas to the generation of physically relevant complex structures: chemical and molecular structures, scalar or vector fields in fluid dynamics or electrodynamics, proposal steps for Markov chain Monte Carlo simulations of physical systems, and more. In all of these areas, we would like to generate fairly complex structures that satisfy rather strict physical constraints (such as conservation laws, differentiability, or smoothness). These constraints make the generation of valid structures harder on the one hand, but on the other they can be used to guide the search.
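As a toy illustration of the "informationally optimal experiment" idea, the sketch below (a hypothetical example, not taken from the workshop program) performs sequential experimental design for a one-parameter Bayesian linear model: at each step it queries the candidate input whose prediction is most uncertain, so each costly "experiment" reduces posterior uncertainty as much as possible. All names and parameter values here are illustrative assumptions.

```python
import random

def posterior(xs, ys, alpha=1.0, sigma2=0.01):
    """Posterior mean/variance of the slope w in y = w*x + noise,
    under a Gaussian prior with precision alpha and noise variance sigma2."""
    prec = alpha + sum(x * x for x in xs) / sigma2
    mean = sum(x * y for x, y in zip(xs, ys)) / sigma2 / prec
    return mean, 1.0 / prec

def next_query(pool, var):
    """Active-learning step: pick the candidate x whose predictive
    variance x^2 * var is largest (i.e., the most informative query)."""
    return max(pool, key=lambda x: x * x * var)

random.seed(0)
true_w = 2.0                                   # unknown ground truth
pool = [random.uniform(-1, 1) for _ in range(50)]  # candidate experiments
xs, ys = [], []

for _ in range(5):                             # five sequential queries
    _, var = posterior(xs, ys)
    x = next_query(pool, var)
    pool.remove(x)
    xs.append(x)
    ys.append(true_w * x + random.gauss(0, 0.1))   # run the "experiment"

mean, var = posterior(xs, ys)
print(mean, var)  # estimate of true_w and remaining posterior variance
```

In this linear setting the rule simply favors large |x|, but the same loop structure (fit posterior, query the most uncertain candidate, update) underlies the active-learning and experimental-design themes of the workshop.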

Organizing Committee

Alán Aspuru-Guzik (Harvard University)
Frank Noe, Chair (Freie Universität Berlin)
Ankit Patel (Rice University)
Katya Scheinberg (Lehigh University)
Ruth Urner (York University)