Weike Ye1, Xiangyun Lei1, Amalie Trewartha1
1Toyota Research Institute
During a closed-loop materials discovery process, a surrogate model is often used to form hypotheses about which new regions of parameter space to explore, and more expensive experiments or higher-fidelity computations are then performed to confirm or refute these hypotheses. The relatively high cost of re-training or fine-tuning surrogate models as high-fidelity data are acquired leads to a trade-off between accuracy and computational cost, which can in practice limit the ability to deploy state-of-the-art neural networks as surrogate models. In this work, we first propose a holistic pruning technique based on the Lottery Ticket Hypothesis that allows for structure optimization of neural networks (NNs). We demonstrate that, for both fully connected NNs and graph convolutional NNs, we can find sparse sub-networks of the original models that train faster and achieve comparable or better accuracy and better generalizability with ~70% or fewer weights. Subsequently, we combine the pruning technique with active learning frameworks for materials discovery and show that the additional pruning step reduces the total number of iterations required to locate all desired samples by up to 20% and decreases the retraining time of the NN-based agent by up to 40% in each iteration. We believe this acceleration allows for more rapid exploration of the parameter space, leading to faster identification and validation of new materials.
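As a rough illustration of the Lottery-Ticket-style pruning loop referenced above, the following Python/PyTorch sketch implements iterative magnitude pruning with weight rewinding: train, prune the smallest-magnitude weights, rewind the survivors to their initial values, and retrain. The toy architecture, prune fraction, and training hyperparameters are illustrative assumptions, not the settings used in this work; note that three rounds at 30% per round removes roughly two thirds of the weights, in line with the ~70% sparsity figure quoted in the abstract.

```python
# Minimal sketch of Lottery-Ticket-style iterative magnitude pruning
# with rewinding (illustrative only; not the authors' exact procedure).
import copy
import torch
import torch.nn as nn

def train(model, data, target, masks=None, epochs=100, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(data), target).backward()
        opt.step()
        if masks:  # keep already-pruned weights pinned at zero
            with torch.no_grad():
                for name, p in model.named_parameters():
                    if name in masks:
                        p.mul_(masks[name])
    return model

def magnitude_masks(model, masks, prune_frac=0.3):
    # Prune `prune_frac` of the *remaining* weights in each weight matrix.
    new_masks = {}
    for name, p in model.named_parameters():
        if "weight" not in name:
            continue
        mask = masks.get(name, torch.ones_like(p))
        alive = p[mask.bool()].abs()
        k = int(prune_frac * alive.numel())
        if k == 0:
            new_masks[name] = mask
            continue
        threshold = alive.sort().values[k]
        new_masks[name] = mask * (p.abs() >= threshold).float()
    return new_masks

def rewind(model, init_state, masks):
    # Reset surviving weights to their initial values; zero the pruned ones.
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.copy_(init_state[name] * masks[name])

# Usage on toy data (the small fully connected surrogate is an assumption):
torch.manual_seed(0)
X, y = torch.randn(256, 8), torch.randn(256, 1)
model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
init_state = copy.deepcopy(model.state_dict())
masks = {}
for _ in range(3):                    # three prune-rewind-retrain rounds
    train(model, X, y, masks)
    masks = magnitude_masks(model, masks, prune_frac=0.3)
    rewind(model, init_state, masks)
train(model, X, y, masks)             # final training of the sparse sub-net
```

In a closed-loop setting, the sparse sub-network found this way would serve as the surrogate agent, so that the per-iteration retraining step operates on far fewer weights.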