Virtual Talk: Towards Higher-Order and Disentangled Explainable AI

Gregoire Montavon
Freie Universität Berlin

The field of Explainable AI has produced methods that can robustly identify, for a broad range of complex nonlinear ML models, the input features (e.g. pixels) that are most relevant to a prediction. Many explanation techniques, including LRP, can be called first-order techniques: they extract, for a given example and its associated prediction, a set of scores representing the contribution of each input feature to that prediction. In this talk, new developments will be presented that build on LRP and increase its expressiveness. These developments include (1) the identification of pairs or collections of features that jointly contribute to the model's decision (i.e. higher-order explanations), and (2) the decomposition of the original explanation into multiple disentangled components that jointly explain the overall prediction.
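As a minimal numerical sketch of the distinction between first-order and higher-order explanations (not the LRP-based methods of the talk), consider a toy quadratic model f(x) = xᵀWx, where the prediction decomposes exactly into pairwise terms; the matrix W and the example x below are hypothetical placeholders.

```python
import numpy as np

# Toy illustration: for f(x) = x^T W x, the output decomposes exactly into
# pairwise relevances R_ij = x_i * W_ij * x_j. Summing over j collapses them
# into first-order scores R_i (one number per feature); keeping the full
# matrix exposes which feature *pairs* jointly contribute (higher-order).

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))      # hypothetical model parameters
x = rng.normal(size=4)           # one input example

prediction = x @ W @ x           # model output f(x)

R_pairs = np.outer(x, x) * W     # higher-order (pairwise) relevances R_ij
R_first = R_pairs.sum(axis=1)    # first-order relevances R_i

# Both decompositions are conservative: they sum back to the prediction.
assert np.isclose(R_pairs.sum(), prediction)
assert np.isclose(R_first.sum(), prediction)

print("prediction:", prediction)
print("first-order scores:", R_first)
print("pairwise (higher-order) scores:\n", R_pairs)
```

The first-order vector shows how relevant each feature is on its own terms, while the pairwise matrix reveals joint contributions that the flat scores cannot express.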

Presentation (PDF File)
