David Mebane1,2
West Virginia University1,KBR Wyle Services2
Neural networks (NNs) are powerful tools for machine learning, with stunning results in computer vision and large language models dominating the news. However, NNs are often misapplied in scientific modeling contexts, where input-output space dimensionalities are small to moderate. There are numerous examples of other methods, such as decision trees and Gaussian processes (GPs), outperforming NNs in both accuracy and inference speed on tabular estimation tasks. A shift away from NNs in scientific modeling contexts therefore promises faster and more accurate performance. A framework for scientific machine learning in which fast, decomposed GPs represent well-defined scientific functions has shown considerable promise, outperforming recurrent neural networks (such as LSTMs) on multiple benchmark dynamic modeling tasks. These well-defined functions also present opportunities for multi-scale modeling, linking electronic and atomistic scales to device scales. Multiple applications in materials modeling for energy applications -- including solid-state batteries and high-temperature CO2 electrolyzers -- will be presented.
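To illustrate the kind of low-dimensional regression the abstract refers to, the following is a minimal, self-contained sketch of standard GP regression (posterior mean and variance under a squared-exponential kernel) on a toy one-dimensional function. The kernel choice, length scale, and toy data are illustrative assumptions, not the decomposed-GP framework described above.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0, variance=1.0):
    # squared-exponential (RBF) covariance between two sets of 1-D inputs
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-6, length_scale=1.0):
    # textbook GP regression: Cholesky solve for the posterior mean/variance
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test, length_scale)
    K_ss = rbf_kernel(X_test, X_test, length_scale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)
    return mean, var

# toy low-dimensional "scientific" function: 8 noiseless samples of sin(x)
X_train = np.linspace(0, 2 * np.pi, 8)
y_train = np.sin(X_train)
X_test = np.array([np.pi / 2])
mean, var = gp_predict(X_train, y_train, X_test)
```

With only eight training points, the posterior mean interpolates the smooth target closely, and the posterior variance quantifies uncertainty between samples -- a capability that, in small-data scientific regimes, often comes at far lower cost than training an NN.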