Apr 9, 2025
2:00pm - 2:15pm
Summit, Level 4, Room 422
Sakib Matin¹, Alice Allen¹, Emily Shinkle¹, Yulia Pimonova¹, Aleksandra Pachalieva¹, Galen Craven¹, Benjamin Nebgen¹, Justin Smith², Richard Messerly¹, Ying Wai Li¹, Sergei Tretiak¹, Kipton Barros¹, Nicholas Lubbers¹
¹Los Alamos National Laboratory, ²Nvidia
Machine learning interatomic potentials (MLIPs) are revolutionizing molecular dynamics (MD) simulations, which are ubiquitous in chemistry and materials modelling. Recent MLIPs have tended towards more complex architectures and larger datasets, and the resulting increase in computational and memory costs can prohibit large-scale MD simulations. Here, we present a teacher-student training framework in which latent knowledge from the teacher (atomic energies) augments the students' training, improving accuracy at no extra computational cost during inference. The lightweight student MLIPs achieve faster MD speeds at a fraction of the memory footprint. Additionally, we show that student MLIPs can surpass the accuracy of the teacher models, especially when using the knowledge from an ensemble of teachers. This work highlights a practical method to train more accurate MLIPs using existing datasets and to reduce the resources required for large-scale MD simulations.
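The training objective described above can be sketched as a combination of the usual total-energy loss and a distillation term that matches the student's per-atom energies to the teacher's. This is a minimal illustrative sketch, not the authors' implementation: the function name, the weighting hyperparameter `alpha`, and the use of a simple mean over an ensemble of teachers are all assumptions for illustration.

```python
import numpy as np

def distillation_loss(student_atomic, ref_total, teacher_atomic_list, alpha=0.5):
    """Hypothetical teacher-student loss for an MLIP.

    student_atomic: student's predicted per-atom energies, shape (n_atoms,)
    ref_total: reference total energy from the training set (scalar)
    teacher_atomic_list: list of per-atom energy arrays, one per teacher
    alpha: weight on the distillation term (assumed hyperparameter)
    """
    # Standard supervised term: error in the predicted total energy.
    energy_term = (student_atomic.sum() - ref_total) ** 2

    # Distillation term: match the teachers' latent atomic energies.
    # With several teachers, average their per-atom predictions (ensemble).
    ensemble_atomic = np.mean(teacher_atomic_list, axis=0)
    distill_term = np.mean((student_atomic - ensemble_atomic) ** 2)

    return energy_term + alpha * distill_term
```

The key point is that the atomic energies are internal quantities of the teacher network, not labels in the dataset, so the distillation term adds supervision without requiring new reference calculations, and it costs nothing at inference time.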