Dec 3, 2024
4:30pm - 5:00pm
Hynes, Level 2, Room 209
Rama Vasudevan1, Aditya Vatsavai1, Yongtao Liu1, Sumner Harris1
Oak Ridge National Laboratory1
Autonomous experiments (AE) offer the potential to dramatically accelerate the growth and discovery of new materials, improving experimental efficiency by focusing effort on only those experiments that are statistically likely to yield improved understanding or optimized properties. Traditional AE has relied heavily on Bayesian Optimization (BO) methods, which are extremely useful for optimizing targeted properties when faced with limited experimental data and high uncertainty.

Here, we will explore methods that extend beyond traditional BO, asking whether the error signal of a data-driven or physics-based model can itself be a useful target to minimize. This follows principles derived from curiosity-driven reinforcement learning, but is applied within a more classical optimization setting. We explore this concept by creating a data-driven model that takes features of image patches as input and predicts whole spectra, and then employing an agent whose goal is to predict the error of this original data-driven model in order to drive experimental measurements. We discuss the similarities and differences with the traditional deep kernel learning approach. Next, we explore continuous control via model-based predictive methods. These methods are useful in cases where continuous control must be exerted over a dynamical system for which a dynamics model can be postulated. Finally, we will touch on the ability to merge physics-based simulations with experiments using RL-based approaches. Together, this suite of algorithms can be used in autonomous laboratories to extend beyond traditional BO, incorporate physics-based knowledge, and attempt to maximize physics discovery, as opposed to pursuing any specific materials objective.
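The error-targeting acquisition loop described above can be sketched in a few lines. This is a minimal illustrative toy, not the authors' implementation: the synthetic patch features, the ridge-regression stand-ins for both the spectral model and the error-predicting agent, and all variable names are assumptions made only to show the curiosity-driven structure (train a primary model, train a second model to predict its error, then measure where the predicted error is largest).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (illustrative only): image-patch features X and the
# whole spectra Y they map to, for a pool of candidate measurement locations.
n_pool, n_feat, n_bins = 200, 5, 32
X = rng.normal(size=(n_pool, n_feat))
W_true = rng.normal(size=(n_feat, n_bins))
Y = np.tanh(X @ W_true) + 0.05 * rng.normal(size=(n_pool, n_bins))

def ridge_fit(A, B, lam=1e-3):
    """Closed-form ridge regression: weights mapping features A to targets B."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ B)

# Seed the campaign with a few random measurements.
measured = list(rng.choice(n_pool, size=10, replace=False))

for step in range(20):
    Xm, Ym = X[measured], Y[measured]
    # Primary data-driven model: patch features -> predicted spectrum.
    W = ridge_fit(Xm, Ym)
    # Per-sample reconstruction error of the primary model on measured data.
    err = np.linalg.norm(Xm @ W - Ym, axis=1)
    # "Curiosity" agent: a second model trained to predict that error signal.
    w_err = ridge_fit(Xm, err[:, None])
    # Acquisition: next, measure the unvisited patch with the largest
    # predicted model error, rather than a BO property objective.
    candidates = [i for i in range(n_pool) if i not in measured]
    pred_err = (X[candidates] @ w_err).ravel()
    measured.append(candidates[int(np.argmax(pred_err))])

print(len(measured))  # 10 seed points + 20 curiosity-driven acquisitions = 30
```

In a real instrument loop, each acquisition step would trigger an actual measurement of the selected spectrum before both models are refit; here the "measurement" is simply a lookup into the synthetic pool.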