Apr 26, 2024
4:00pm - 4:30pm
Room 320, Level 3, Summit
Daniel Schwalbe-Koda
UCLA
Recent advances in machine learning interatomic potentials (MLIPs) allow density functional theory (DFT) calculations to be bypassed with models that balance high accuracy against relatively low computational cost. However, MLIPs can be unreliable in regions of configuration space not represented in the training data, which often hinders their use as universal predictors when modeling high-complexity systems. In this talk, I will describe how understanding robust generalization in ML can improve the development of next-generation ML models for materials simulation. First, I will demonstrate how extrapolation in atomistic systems can be rigorously defined in a model-free way, i.e., without surrogate metrics such as the variance of predictions. Then, I will describe how deep learning theory can be used to improve the generalization of MLIPs. These results are applied to several problems in materials simulation, from the thermodynamics of phase transformations to coverage effects in catalysis. In combination with automated workflows for combinatorial data generation, robustness in ML models can help drive the field towards universal MLIPs that reach length and time scales not accessible to ground-truth calculations.
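To make the idea of model-free extrapolation detection concrete, the sketch below flags a new configuration as extrapolative whenever any of its descriptor components falls outside the range spanned by the training set. This is a deliberately crude, illustrative proxy and not the rigorous definition presented in the talk; the function name, the 2-D toy descriptors, and the range-based criterion are all assumptions made for the example.

```python
import numpy as np

def is_extrapolation(X_train, x):
    """Crude model-free check: flag descriptor vector x as extrapolative
    if any component lies outside the per-dimension range of the training
    descriptors X_train. (Illustrative proxy only; the talk's definition
    of extrapolation is more rigorous.)"""
    lo = X_train.min(axis=0)  # per-dimension lower bounds of training data
    hi = X_train.max(axis=0)  # per-dimension upper bounds of training data
    return bool(np.any((x < lo) | (x > hi)))

# Toy 2-D descriptors for five hypothetical training configurations
X_train = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])

print(is_extrapolation(X_train, np.array([0.5, 0.2])))  # within training ranges: False
print(is_extrapolation(X_train, np.array([2.0, 0.5])))  # outside training ranges: True
```

Note that this check requires no trained model at all, which is the point of a model-free criterion: unlike prediction-variance heuristics, it depends only on the geometry of the training data in descriptor space.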