Model Interpretability for Building Confidence and Sparking Insight in Scientific Applications

Julia Ling
Citrine Informatics

Interpretability lets practitioners diagnose poor performance in their machine learning models and uncover sample bias in their training sets. Furthermore, in scientific contexts, it sparks new physical insights for scientists and engineers, enabling human learning alongside machine learning.

In this talk, I will describe three different approaches to interpretability. I will start with the domain of turbulence modeling for turbomachinery applications and describe how spatially-resolved feature importance visualizations can drive deeper understanding of the physical mechanisms at work in turbulent flows. I will then turn to materials development, explaining how non-linear sensitivity analysis can be used to visualize the predicted performance of candidate materials. Finally, I will describe approaches to quantifying the performance of sets of candidate materials in order to better guide project direction. An illustrative sketch of the kind of feature importance analysis referenced here follows below.

