Existing approaches to interpretability largely focus on fixed model families (e.g., decision trees, sparse linear models, etc.) over features. Program synthesis offers a powerful alternative: rather than use an existing model family, the user can define new, custom model families tailored to their problem via a domain-specific programming language (DSL). In particular, a DSL describes constructs (e.g., if-then-else statements, while loops, etc.) that can be composed together to create complex programs, which form the model family. I will describe our work on leveraging program synthesis to learn interpretable control policies, as well as ongoing work using these ideas to train interpretable models for RNA splice prediction.
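To make the idea concrete, here is a minimal, hypothetical sketch of such a DSL: a grammar of if-then-else constructs over named features, whose compositions form the model family of interpretable policies. All names (the `Act`/`If` constructs, the `pole_angle` feature) are illustrative assumptions, not taken from the talk.

```python
from dataclasses import dataclass
from typing import Union

# Hypothetical mini-DSL for control policies.
# Grammar: Policy ::= Act(action) | If(feature < threshold, Policy, Policy)

@dataclass
class Act:
    action: str

@dataclass
class If:
    feature: str
    threshold: float
    then_branch: "Policy"
    else_branch: "Policy"

Policy = Union[Act, If]

def run(policy: Policy, state: dict) -> str:
    """Interpret a DSL program on a single observation."""
    if isinstance(policy, Act):
        return policy.action
    branch = (policy.then_branch
              if state[policy.feature] < policy.threshold
              else policy.else_branch)
    return run(branch, state)

# A program in this DSL is directly readable as nested if-then-else rules:
policy = If("pole_angle", 0.0, Act("push_left"), Act("push_right"))
print(run(policy, {"pole_angle": -0.1}))  # push_left
```

A synthesizer would search over programs in this grammar (e.g., by enumerating compositions of `If` nodes and scoring them on rollouts), so the learned policy is itself a short, human-readable program rather than an opaque function.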