MRS Meetings and Events

 

DS03.04.05 2023 MRS Fall Meeting

Scientific Understanding of Materials using Multi-Explanation Graph Attention Network

When and Where

Nov 28, 2023
2:45pm - 3:00pm

Sheraton, Second Floor, Liberty B/C

Presenter

Co-Author(s)

Pascal Friederich1, Jonas Teufel1, Luca Torresi1, Patrick Reiser1

Karlsruhe Institute of Technology1

Abstract

Explainable artificial intelligence (XAI) methods are expected to improve trust during human-AI interactions, provide tools for model analysis, and extend human understanding of complex scientific problems [1]. Explanation-supervised training makes it possible to improve explanation quality by training self-explaining XAI models on ground-truth or human-generated explanations. However, existing explanation methods have limited expressiveness and interpretability because they generate only a single explanation in the form of node and edge importances. To address this, we propose the novel multi-explanation graph attention network (MEGAN) [2]. Our fully differentiable, attention-based model features multiple explanation channels, which can be chosen independently of the task specifications. We first validate our model on a synthetic graph regression dataset. We show that, in the special single-explanation case, our model significantly outperforms existing post-hoc and explanation-supervised baseline methods. Furthermore, we demonstrate significant advantages when using two explanations, both in quantitative explanation measures and in human interpretability. Finally, we demonstrate our model's capabilities on multiple real-world datasets [3,4]. We find that our model produces sparse, high-fidelity explanations consistent with human intuition about those tasks while matching state-of-the-art graph neural networks in predictive performance, indicating that explanations and accuracy are not necessarily a trade-off.

An interactive version of the trained model is available at https://megan.aimat.science

[1] Krenn, M., Pollice, R., Guo, S.Y., Aldeghi, M., Cervera-Lierta, A., Friederich, P., dos Passos Gomes, G., Häse, F., Jinich, A., Nigam, A. and Yao, Z., 2022. On scientific understanding with artificial intelligence. Nature Reviews Physics, 4(12), pp.761-769.
[2] Teufel, J., Torresi, L., Reiser, P. and Friederich, P., 2022. MEGAN: Multi-Explanation Graph Attention Network. arXiv preprint arXiv:2211.13236.
[3] Teufel, J., Torresi, L. and Friederich, P., 2023. Quantifying the Intrinsic Usefulness of Attributional Explanations for Graph Neural Networks with Artificial Simulatability Studies. arXiv preprint arXiv:2305.15961.
[4] Sturm, H., Teufel, J., Isfeld, K.A., Friederich, P. and Davis, R.L., 2023. Mitigating Molecular Aggregation in Drug Discovery with Predictive Insights from Explainable AI. arXiv preprint arXiv:2306.02206.
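The core idea of multi-channel attention explanations can be illustrated with a minimal NumPy sketch. This is a hypothetical toy, not the authors' MEGAN implementation: each of K explanation channels learns its own attention distribution over the nodes of a graph, so the model emits one node-importance vector per channel alongside its prediction. All names and shapes here (`W_att`, `w_out`, etc.) are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_channel_attention(node_features, W_att, w_out):
    """Toy multi-explanation attention (illustrative, not MEGAN itself).

    node_features: (N, F) array of node embeddings.
    W_att:         (K, F) one attention weight vector per explanation channel.
    w_out:         (K,)   per-channel output weights.

    Returns a scalar prediction and a (K, N) array whose k-th row is
    channel k's softmax attention over nodes, i.e. its node-importance
    explanation.
    """
    K = W_att.shape[0]
    # Each channel scores every node independently and normalizes with softmax.
    importances = np.stack([softmax(node_features @ W_att[k]) for k in range(K)])
    # Each channel pools the graph with its own attention weights ...
    channel_readouts = importances @ node_features            # (K, F)
    # ... and the channels are combined into a single prediction.
    prediction = float(w_out @ channel_readouts.sum(axis=1))  # scalar
    return prediction, importances

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))   # 5 nodes, 3 features
W = rng.normal(size=(2, 3))   # 2 explanation channels
w = rng.normal(size=(2,))
y, imp = multi_channel_attention(X, W, w)
# imp[k] is channel k's node-importance explanation; each row sums to 1.
```

Because each channel's attention is a separate normalized distribution over nodes, the two rows of `imp` can highlight disjoint substructures, which is the property the abstract exploits for multi-explanation interpretability.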

Keywords

organic

Symposium Organizers

James Chapman, Boston University
Victor Fung, Georgia Institute of Technology
Prashun Gorai, National Renewable Energy Laboratory
Qian Yang, University of Connecticut

Symposium Support

Bronze
Elsevier B.V.

Publishing Alliance

MRS publishes with Springer Nature