Dec 5, 2024
9:30am - 9:45am
Hynes, Level 2, Room 210
Izumi Takahara¹, Kiyou Shibata¹, Teruyasu Mizoguchi¹
¹The University of Tokyo
In recent years, significant progress has been made in deep learning-based representation learning and generative methods for materials, accelerating materials exploration and design. For discovering inorganic materials with desired properties, inverse design of crystal structures using generative models has emerged as a promising approach [1]. Crystal structures comprise multiple types of variables, including lattice vectors, atomic species, and atomic coordinates, which allows for various approaches to representing and generating them. Recently, efforts have been made to encode and learn representations of 3D crystals using Transformers [2,3]. In this presentation, we introduce our approach to leveraging Transformer-based crystal encoding for crystal generation, primarily within diffusion models, and explore suitable conditioning methods for target-aware materials generation [4]. Our model demonstrated high versatility, achieving comparable or superior success rates across various datasets relative to previously reported approaches. We will present the performance of our generative model with a Transformer backbone for the inverse design of crystals and discuss the implications derived from our models.

[1] C. Zeni et al., arXiv:2312.03687 (2023).
[2] K. Yan et al., Proc. NeurIPS (2022).
[3] T. Taniai et al., arXiv:2403.11686 (2024).
[4] I. Takahara et al., arXiv:2406.09263 (2024).
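The abstract notes that a crystal structure combines three variable types: lattice vectors, atomic species, and atomic coordinates. As a minimal illustrative sketch (not the authors' actual encoding, and with all names hypothetical), one might bundle these variables as follows:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Crystal:
    """Minimal crystal representation with the three variable types
    named in the abstract (illustrative only)."""
    lattice: np.ndarray      # (3, 3) lattice vectors as rows, in angstroms
    species: list            # one chemical symbol per atomic site
    frac_coords: np.ndarray  # (N, 3) fractional atomic coordinates in [0, 1)

    def cart_coords(self) -> np.ndarray:
        # Cartesian positions: fractional coordinates times the lattice matrix.
        return self.frac_coords @ self.lattice


# Example: two sites of rock-salt NaCl in a cubic cell (a ≈ 5.64 Å).
nacl = Crystal(
    lattice=np.eye(3) * 5.64,
    species=["Na", "Cl"],
    frac_coords=np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]]),
)
```

A generative model such as the diffusion approach described here must handle all three components jointly, since lattice and coordinates are continuous while species are discrete.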