Workshop II: Interpretable Learning in Physical Sciences

October 14 - 18, 2019

Overview

An assumption often made in the physical sciences is that an apparently high-dimensional process can be approximated by a small number of free parameters. Previous IPAM programs have explored this assumption using paradigms such as dimension reduction, sparse recovery, clustering and representations with hidden variables, and graphical representation of conditional independence (graphical models). This workshop will have a different focus and will explore representation learning for physical systems, i.e., learning qualitative physics through structures such as those above that carry physical meaning. The ultimate goal is not merely to find structure in the data, but to interpret that structure in terms of fundamental physical principles. The workshop will include methods to summarize and interpret a complicated learned model (e.g., a deep neural network) by interrogating the model about what it has learned and why (e.g., relevance propagation and sensitivity analysis).

This workshop will include a poster session; a request for posters will be sent to registered participants in advance of the workshop.

Organizing Committee

Cecilia Clementi (Rice University)
Kyle Cranmer (New York University)
J. Nathan Kutz (University of Washington)
Francesco Paesani (University of California, San Diego (UCSD))
Andrew White (University of Rochester)