Apr 24, 2024
8:30am - 8:45am
Room 322, Level 3, Summit
Ajinkya Hire1, Jason Gibson1, Benjamin Geisler1, Philip Dee1, Oscar Barrera1, Peter Hirschfeld1, Richard Hennig1
University of Florida1
The Eliashberg spectral function (α<sup>2</sup>F) is at the core of understanding the electron-phonon superconducting properties of a material. However, calculating α<sup>2</sup>F from ab initio methods for a large number of materials is expensive, impeding the high-throughput search for novel superconductors. With machine learning models gaining prominence for materials-property prediction over the past decade, our work harnesses this potential for α<sup>2</sup>F prediction. In this presentation, I will present our expansive database of theoretically calculated, high-quality α<sup>2</sup>F and introduce our state-of-the-art equivariant neural network models trained on it. The talk also elucidates the nuances of training models to predict continuous properties such as α<sup>2</sup>F, focusing on innovative physics-inspired node embeddings that bolster model performance. For the α<sup>2</sup>F-derived properties, namely the electron-phonon coupling λ and two frequency moments (ω<sub>log</sub> and 〈ω<sup>2</sup>〉), our base model, relying solely on the crystal structure, achieves Pearson correlation coefficients of 0.3, 0.7, and 0.8, respectively. Integrating moderately expensive node embeddings increases these coefficients to 0.5, 0.8, and 0.9. We observe an intriguing power-law relationship between model loss and training-set size, emphasizing the imperative for community-collaborated expansion of α<sup>2</sup>F databases.
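For context on the derived quantities named above, the sketch below shows how λ, ω<sub>log</sub>, and 〈ω<sup>2</sup>〉 are conventionally obtained as frequency moments of a tabulated α<sup>2</sup>F, following the standard Allen-Dynes definitions. This is a minimal illustration, not the authors' code; the function and array names are assumptions.

```python
import numpy as np

def _trapezoid(y, x):
    """Trapezoidal-rule integral of y(x) on a tabulated grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def eliashberg_moments(omega, a2f):
    """Derive lambda, omega_log, and <omega^2> from a tabulated alpha^2 F.

    omega : 1D array of phonon frequencies (omega > 0), e.g. in meV
    a2f   : alpha^2 F(omega) sampled on the same grid
    """
    # Electron-phonon coupling constant: lambda = 2 * int a2F(w)/w dw
    lam = 2.0 * _trapezoid(a2f / omega, omega)
    # Logarithmic-average frequency:
    # omega_log = exp[(2/lambda) * int a2F(w) ln(w)/w dw]
    omega_log = np.exp((2.0 / lam) * _trapezoid(a2f * np.log(omega) / omega, omega))
    # Second frequency moment: <omega^2> = (2/lambda) * int a2F(w) * w dw
    omega_sq = (2.0 / lam) * _trapezoid(a2f * omega, omega)
    return lam, omega_log, omega_sq
```

These three moments are the inputs to Allen-Dynes-type estimates of the superconducting transition temperature, which is why they are natural regression targets for a model that predicts α<sup>2</sup>F.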
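The power-law relationship between model loss and training-set size mentioned at the end can be verified with a simple least-squares fit in log-log space. The numbers below are purely illustrative placeholders, not results from the talk.

```python
import numpy as np

def fit_scaling_law(n_train, loss):
    """Fit loss ~ a * N^(-b) by linear regression in log-log space.

    n_train : array of training-set sizes N (hypothetical values)
    loss    : corresponding validation losses
    """
    slope, intercept = np.polyfit(np.log(n_train), np.log(loss), 1)
    return np.exp(intercept), -slope  # prefactor a, exponent b

# Illustrative (made-up) numbers: loss shrinking as the dataset grows
a, b = fit_scaling_law([500, 1000, 2000, 4000], [0.40, 0.28, 0.20, 0.14])
```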