December 1 - 6, 2024
Boston, Massachusetts
2024 MRS Fall Meeting & Exhibit
MT04.01.04

LLaMP—Large Language Model Made Powerful for High-Fidelity Materials Knowledge Retrieval and Distillation

When and Where

Dec 2, 2024
11:15am - 11:30am
Hynes, Level 2, Room 210

Co-Author(s)

Yuan Chiang1,2, Elvis Hsieh1, Chia-Hong Chou3, Janosh Riebesell4,2

University of California, Berkeley1; Lawrence Berkeley National Laboratory2; Foothill College3; University of Cambridge4

Abstract

Reducing hallucinations in Large Language Models (LLMs) is imperative for use in the sciences, where reliability and reproducibility are crucial. However, LLMs inherently lack long-term memory, making it a nontrivial, ad hoc, and inevitably biased task to fine-tune them on domain-specific literature and data. Here, we introduce LLaMP, a multimodal retrieval-augmented generation (RAG) framework of hierarchical reasoning-and-acting (ReAct) agents that can dynamically and recursively interact with computational and experimental data on Materials Project (MP) and run atomistic simulations via a high-throughput workflow interface. Without fine-tuning, LLaMP demonstrates a strong ability to use tools to comprehend and integrate various modalities of materials science concepts, fetch relevant data stores on the fly, process higher-order data (such as crystal structure and elastic tensor), and streamline complex tasks in computational materials and chemistry. We propose a simple metric combining uncertainty and confidence estimates to evaluate the self-consistency of responses by LLaMP and vanilla LLMs. Our benchmark shows that LLaMP effectively mitigates the intrinsic bias in LLMs, counteracting the errors in bulk moduli, electronic bandgaps, and formation energies that seem to derive from mixed data sources. We also demonstrate LLaMP's capability to edit crystal structures and run annealing molecular dynamics simulations using pre-trained machine-learning force fields. The framework offers an intuitive and nearly hallucination-free approach to exploring and scaling materials informatics and establishes a pathway for knowledge distillation and fine-tuning other language models.
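The abstract mentions "a simple metric combining uncertainty and confidence estimates to evaluate the self-consistency of responses" but does not give its formula. As a purely illustrative sketch (not the paper's actual metric), one way to combine the two signals is to reward high mean confidence while penalizing relative spread across repeated responses to the same query:

```python
import statistics

def self_consistency(samples, confidences):
    """Hypothetical self-consistency score (illustrative only; the
    paper's actual definition may differ).

    samples     -- repeated numeric answers from the model for one query
    confidences -- the model's confidence estimate for each answer, in [0, 1]

    High mean confidence and low relative spread give a score near 1;
    scattered answers pull the score toward 0.
    """
    mean = statistics.fmean(samples)
    # Uncertainty term: coefficient of variation across repeated samples
    spread = statistics.pstdev(samples) / (abs(mean) + 1e-9)
    # Confidence term: average self-reported confidence
    conf = statistics.fmean(confidences)
    return conf / (1.0 + spread)

# Example: five repeated answers for a bulk-modulus query (values in GPa)
score = self_consistency([160, 162, 158, 161, 159],
                         [0.90, 0.80, 0.85, 0.90, 0.88])
```

Tightly clustered answers score higher than scattered ones at equal confidence, which matches the abstract's intent of flagging responses that a vanilla LLM produces inconsistently.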

Symposium Organizers

Kjell Jorner, ETH Zurich
Jian Lin, University of Missouri-Columbia
Daniel Tabor, Texas A&M University
Dmitry Zubarev, IBM

Session Chairs

Kjell Jorner
Jian Lin
