Apr 10, 2025
9:00am - 9:15am
Summit, Level 4, Room 422
Sarah Allec1, Maxim Ziatdinov1
Pacific Northwest National Laboratory1
Many traditional machine learning (ML) models, especially neural networks, excel at predicting material properties but often lack uncertainty quantification, which is critical when working with noisy or limited data and when performing active learning (AL)-driven materials or experimental design. Historically, Gaussian processes (GPs) have been favored in these applications for their ability to provide robust uncertainty estimates. However, GPs struggle with systems featuring discontinuities and non-stationarities, which are common in physical science problems, as well as with high-dimensional data. Bayesian neural networks (BNNs), in which every weight is replaced by a probability distribution estimated with Markov chain Monte Carlo (MCMC), offer a powerful alternative, combining the flexibility of neural networks with the ability to quantify prediction uncertainty (a minimal sketch is given below). Here, we demonstrate the advantages of BNNs in materials science on several benchmark and real-world experimental datasets. Our findings show that BNNs
i) handle discontinuous and non-stationary data more accurately than GPs and
ii) enable more efficient active learning than GPs on small datasets. Furthermore, we find that partially Bayesian neural networks (PBNNs) with only one or two probabilistic layers can achieve prediction accuracies comparable to fully Bayesian neural networks (FBNNs), offering the advantages of FBNNs at lower computational cost. Lastly, we investigate theory-informed transfer learning of BNNs by centering the weight priors of a BNN trained on experimental data on the pre-trained weights of a deterministic neural network trained on simulation data. These results demonstrate the feasibility and potential of BNNs for driving advances in non-trivial materials science problems with limited, complex datasets.
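To make the FBNN construction concrete, the sketch below shows a minimal fully Bayesian regression network in NumPyro, where every weight and bias carries a Normal prior and the posterior is sampled with the NUTS MCMC sampler. This is an illustrative reconstruction of the general technique described above, not the authors' implementation; the architecture, priors, and the toy step-function data are placeholder choices.

```python
import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS, Predictive

def fbnn(X, y=None, hidden=16):
    """Fully Bayesian MLP: every weight and bias carries a Normal prior."""
    d_in = X.shape[-1]
    w1 = numpyro.sample("w1", dist.Normal(0.0, 1.0).expand([d_in, hidden]).to_event(2))
    b1 = numpyro.sample("b1", dist.Normal(0.0, 1.0).expand([hidden]).to_event(1))
    w2 = numpyro.sample("w2", dist.Normal(0.0, 1.0).expand([hidden, 1]).to_event(2))
    b2 = numpyro.sample("b2", dist.Normal(0.0, 1.0).expand([1]).to_event(1))
    h = jnp.tanh(X @ w1 + b1)                               # hidden layer
    mu = (h @ w2 + b2).squeeze(-1)                          # predictive mean
    sigma = numpyro.sample("sigma", dist.HalfNormal(1.0))   # observation noise
    numpyro.sample("y", dist.Normal(mu, sigma), obs=y)

# Toy discontinuous target (a noisy step function) -- placeholder data
# standing in for the kind of non-stationary behavior discussed above.
X_train = jnp.linspace(-1.0, 1.0, 50)[:, None]
y_train = jnp.where(X_train[:, 0] < 0.0, -1.0, 1.0) \
    + 0.1 * random.normal(random.PRNGKey(0), (50,))

# Sample the full weight posterior with the NUTS MCMC sampler.
mcmc = MCMC(NUTS(fbnn), num_warmup=500, num_samples=1000)
mcmc.run(random.PRNGKey(1), X_train, y_train)

# Posterior predictive: mean as prediction, standard deviation as uncertainty.
predictive = Predictive(fbnn, mcmc.get_samples())
y_pred = predictive(random.PRNGKey(2), X_train)["y"]
y_mean, y_std = y_pred.mean(axis=0), y_pred.std(axis=0)
```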
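A PBNN keeps most layers deterministic and reserves priors for one or two layers, which shrinks the dimensionality of the MCMC problem. The sketch below (reusing the imports above) makes only the output layer probabilistic; w1_det and b1_det are hypothetical arrays holding weights obtained by conventional deterministic training.

```python
def pbnn(X, w1_det, b1_det, y=None):
    """Partially Bayesian MLP: deterministic hidden layer, probabilistic output layer."""
    h = jnp.tanh(X @ w1_det + b1_det)   # frozen, conventionally trained layer
    hidden = h.shape[-1]
    # Only the output layer gets priors, so MCMC samples far fewer parameters.
    w2 = numpyro.sample("w2", dist.Normal(0.0, 1.0).expand([hidden, 1]).to_event(2))
    b2 = numpyro.sample("b2", dist.Normal(0.0, 1.0).expand([1]).to_event(1))
    mu = (h @ w2 + b2).squeeze(-1)
    sigma = numpyro.sample("sigma", dist.HalfNormal(1.0))
    numpyro.sample("y", dist.Normal(mu, sigma), obs=y)
```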
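Finally, one way to realize the theory-informed transfer learning step is to center each weight prior of the BNN on the corresponding weight of a deterministic network pre-trained on simulation data, with the prior scale controlling how far the experimental data can pull the posterior away. The sketch below follows that reading under the same assumptions as above; sim_w is a hypothetical dictionary of pre-trained simulation weights, and the scale 0.1 is an arbitrary choice.

```python
def bnn_transfer(X, sim_w, y=None, prior_scale=0.1):
    """BNN whose weight priors are centered on simulation-pre-trained weights."""
    w1 = numpyro.sample("w1", dist.Normal(sim_w["w1"], prior_scale).to_event(2))
    b1 = numpyro.sample("b1", dist.Normal(sim_w["b1"], prior_scale).to_event(1))
    w2 = numpyro.sample("w2", dist.Normal(sim_w["w2"], prior_scale).to_event(2))
    b2 = numpyro.sample("b2", dist.Normal(sim_w["b2"], prior_scale).to_event(1))
    h = jnp.tanh(X @ w1 + b1)
    mu = (h @ w2 + b2).squeeze(-1)
    sigma = numpyro.sample("sigma", dist.HalfNormal(1.0))
    numpyro.sample("y", dist.Normal(mu, sigma), obs=y)   # fit on experimental data
```

MCMC then runs exactly as in the first sketch, with the posterior balancing the simulation-informed prior against the experimental likelihood.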