Apr 25, 2024
5:00pm - 7:00pm
Flex Hall C, Level 2, Summit
Vineeth Venugopal¹, Elsa O¹
¹Massachusetts Institute of Technology
The application of Open Large Language Models (LLMs) in materials science is transforming the methodologies traditionally employed in Named Entity Recognition (NER), classification, and information extraction. This study explores the capabilities and performance limits of LLMs in this domain, highlighting their role in automated database creation via Retrieval-Augmented Generation (RAG) pipelines and probing their behavior through an examination of activation functions.

LLMs are increasingly used to automate the extraction of insights from the expansive corpus of materials science literature. They perform efficiently on NER tasks, identifying and categorizing terminology, material properties, and synthesis parameters. These capabilities extend to classification, where LLMs sort documents or data points into predefined categories such as material type, structural characteristics, or application domain.

Despite these promising features, LLMs are not without limitations. One critical issue is their tendency to produce spurious or "hallucinated" outputs. To mitigate this, our study incorporates ensemble methods and evaluates outputs with metrics such as F1-score for classification tasks and ROUGE-L for text generation tasks.

The RAG pipeline is a notable development, automating database creation by combining the strengths of retrieval and generation modules. We focus in particular on applying RAG to build a high-throughput, structured database of material structure-property-processing parameters. The pipeline uses LLMs to encode text documents into a large vector database; queries specified in natural language are transformed into vector embeddings, followed by a vector similarity search. The output is then aggregated and structured, serving as a robust database for further scientific research.

To deepen our understanding of LLM behavior, we also investigate the activation functions within these neural networks. By scrutinizing how different layers and nodes respond to specific input types, we gain insight into the models' interpretability and reliability. This examination helps optimize model performance and provides a diagnostic tool for understanding the complexities inherent in LLMs.

In conclusion, this study offers a comprehensive evaluation of the utility and limitations of Open Large Language Models in materials science. It elaborates on their role in automating NER, classification, and information extraction, their implementation in RAG pipelines for database creation, and the insights gained from analyzing their activation functions. As materials science stands to gain significantly from these advances, understanding and optimizing LLMs can pave the way for more efficient and accurate research methodologies.
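The retrieval step of the RAG pipeline (embed documents, embed the query, rank by vector similarity) can be sketched in a few lines. This is a minimal illustration only: the toy hashing embedder below stands in for the LLM encoder the abstract refers to, and the `embed`/`retrieve` names and the example corpus are hypothetical.

```python
import math
import zlib
from collections import Counter

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy bag-of-words hashing embedder, normalized to unit length.
    A stand-in for the LLM encoder described in the abstract."""
    vec = [0.0] * dim
    for token, count in Counter(text.lower().split()).items():
        vec[zlib.crc32(token.encode()) % dim] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are unit-normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Vector similarity search: embed the query, rank the corpus
    by cosine similarity, and return the top-k passages."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

# Hypothetical mini-corpus of materials science snippets.
corpus = [
    "TiO2 anatase phase synthesized by sol-gel at 450 C",
    "Band gap of anatase TiO2 measured at 3.2 eV",
    "Polymer electrolyte membranes for fuel cells",
]
hits = retrieve("TiO2 band gap", corpus, k=2)
```

In a production pipeline the retrieved passages would then be aggregated and passed to the generation module for structuring into database records.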
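For the generation metric mentioned above, ROUGE-L is an F-measure over the longest common subsequence (LCS) of candidate and reference token sequences. A minimal computation is sketched below; this is a simplified illustration, not the evaluation code used in the study.

```python
def rouge_l_f1(candidate: str, reference: str) -> float:
    """ROUGE-L F1: harmonic mean of LCS-based precision and recall."""
    c, r = candidate.split(), reference.split()
    # Dynamic-programming table for LCS length.
    dp = [[0] * (len(r) + 1) for _ in range(len(c) + 1)]
    for i, ct in enumerate(c):
        for j, rt in enumerate(r):
            if ct == rt:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[-1][-1]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)
```

For example, comparing a generated sentence against a reference extraction: `rouge_l_f1("the band gap is 3.2 eV", "band gap of anatase is 3.2 eV")` shares the 5-token subsequence "band gap is 3.2 eV", giving precision 5/6, recall 5/7, and F1 of 10/13.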