December 1 - 6, 2024
Boston, Massachusetts
2024 MRS Fall Meeting & Exhibit
MT04.04.01

Explainable AI Enables Molecular Design

When and Where

Dec 3, 2024
1:30pm - 2:00pm
Hynes, Level 2, Room 210

Presenter(s)

Co-Author(s)

Pascal Friederich¹, Jonas Teufel¹, Rebecca Davis²

¹Karlsruhe Institute of Technology, ²University of Manitoba

Abstract

Most current explainable AI methods are post-hoc: they analyze already-trained models and generate only importance annotations, which often leads to an accuracy-explainability tradeoff and limits interpretability. Here, we propose a self-explaining multi-explanation graph attention network (MEGAN) [1]. Unlike existing graph explainability methods, our network produces node and edge attributional explanations along multiple channels, the number of which is independent of the task specification. This proves crucial for improving the interpretability of graph regression predictions, as the explanations are intrinsically connected to their effect on the predictions. This makes MEGAN a successful attempt to escape the accuracy-explainability dilemma of post-hoc explanation models.

We first validate our model on a synthetic graph regression dataset with known ground-truth explanations. Our network outperforms existing baseline explainability methods such as GNNExplainer [2] in both the single- and the multi-explanation case, achieving near-perfect explanation accuracy under explanation supervision. We then demonstrate our model's capabilities on multiple real-world datasets, e.g. molecular solubility prediction, and find that it produces sparse, high-fidelity explanations consistent with human intuition about those tasks.

Finally, we demonstrate in a real-world molecular discovery task that the MEGAN model can be combined with counterfactual approaches for predictive in silico molecular design, which we validated experimentally [3]. To our knowledge, this is the first example of rational molecular design with explainable machine learning models. We are currently extending our model toward explaining 3D crystalline materials.

[1] J. Teufel, L. Torresi, P. Reiser, P. Friederich, "MEGAN: Multi-Explanation Graph Attention Network," xAI Conference 2023, 338–360.
[2] Z. Ying, D. Bourgeois, J. You, M. Zitnik, J. Leskovec, "GNNExplainer: Generating Explanations for Graph Neural Networks," NeurIPS 2019.
[3] H. Sturm, J. Teufel, K. A. Isfeld, P. Friederich, R. L. Davis, "Mitigating Molecular Aggregation in Drug Discovery with Predictive Insights from Explainable AI," arXiv preprint arXiv:2306.02206, 2024.
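To illustrate the idea of multi-channel attributional explanations described above, the following is a minimal, hypothetical numpy sketch: one attention head per explanation channel, with each channel contributing both per-node importances and an additive share of the final graph-level prediction. All names, shapes, and the readout are illustrative assumptions, not the authors' MEGAN implementation (which is described in reference [1]).

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_channel_attention(X, A, W, a, v):
    """Toy multi-explanation forward pass (hypothetical, not MEGAN itself).

    X: (n, d) node features
    A: (n, n) adjacency matrix with self-loops
    W: (K, d, d) per-channel feature transforms
    a: (K, d) per-channel attention vectors
    v: (K, d) per-channel readout weights
    Returns a scalar prediction and a (K, n) array of node importances,
    so each channel's explanation is tied directly to its contribution
    to the prediction.
    """
    K, n = W.shape[0], X.shape[0]
    importances = np.zeros((K, n))
    y = 0.0
    for k in range(K):
        H = X @ W[k]                                  # channel-specific transform
        scores = H @ a[k]                             # per-node attention logits
        logits = np.where(A > 0, scores[None, :], -np.inf)  # restrict to edges
        att = softmax(logits, axis=1)                 # attention over neighbors
        Hk = att @ H                                  # aggregated embeddings
        node_imp = np.maximum(Hk @ v[k], 0.0)         # non-negative importances
        importances[k] = node_imp
        y += node_imp.sum()                           # channel's additive share
    return y, importances

# tiny 4-node path graph with self-loops (synthetic example data)
rng = np.random.default_rng(0)
n, d, K = 4, 3, 2
A = np.eye(n)
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
X = rng.normal(size=(n, d))
W = rng.normal(size=(K, d, d))
a = rng.normal(size=(K, d))
v = rng.normal(size=(K, d))
y, imp = multi_channel_attention(X, A, W, a, v)
```

Note that the number of channels K is a free hyperparameter here, independent of the regression target, mirroring the abstract's point that the channel count is decoupled from the task specification.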

Keywords

nanoscale

Symposium Organizers

Kjell Jorner, ETH Zurich
Jian Lin, University of Missouri-Columbia
Daniel Tabor, Texas A&M University
Dmitry Zubarev, IBM

Session Chairs

Kjell Jorner
Jian Lin
