Victor Leon1, Alexander New1, Michael Pekala1, Elizabeth Pogue1, Christopher Stiles1,2
Johns Hopkins University Applied Physics Laboratory1, Johns Hopkins University2
The search for high-temperature superconductors with mechanical flexibility is motivated by critical applications in healthcare (magnetic resonance imaging), maglev vehicles, and the reduction of energy lost in power-line transmission. The discovery of novel superconductors is hindered, both experimentally and computationally, by the enormous number of possible combinations of elements and crystal structures. In recent years, significant interest has been directed toward using machine learning and materials informatics for the inverse design of materials. However, there is no standard way to represent materials for inverse design problems. An ideal representation would retain maximal predictive power for the materials properties relevant to the design application. In this study, we conduct a systematic evaluation of the predictive power of various materials representations (e.g., Roost, Magpie descriptors, element fractions) paired with a variety of distance measures (e.g., Euclidean distance, earth mover's distance). We compare their performance as inputs to a variety of machine learning algorithms (e.g., k-nearest-neighbors regression, neural networks) for predicting relevant superconductor materials properties (e.g., critical temperature, elastic modulus). We present a systematic comparison of these representation methods and algorithms for the application of superconductor property prediction.
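As a minimal sketch of the representation-plus-distance evaluation described above, the snippet below pairs an element-fraction representation with Euclidean distance in a k-nearest-neighbors regressor for critical-temperature prediction. The composition vectors and Tc values here are illustrative placeholders, not real data; in practice the fractions would span a full element vocabulary and the labels would come from a superconductor database, and the `metric` argument is where an alternative distance (e.g., a precomputed earth mover's distance matrix) would be substituted.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Toy element-fraction vectors: each row is a material, each column the
# fraction of one element from a fixed vocabulary (illustrative values only).
X = np.array([
    [0.5, 0.5, 0.0],
    [0.4, 0.4, 0.2],
    [0.0, 0.6, 0.4],
    [0.3, 0.3, 0.4],
])
# Illustrative critical temperatures (K) for the four toy materials.
y = np.array([10.0, 35.0, 90.0, 60.0])

# k-nearest-neighbors regression with Euclidean distance on the fraction
# vectors; changing `metric` swaps in a different distance measure, which is
# the representation/distance pairing being compared in the study.
knn = KNeighborsRegressor(n_neighbors=2, metric="euclidean")
knn.fit(X, y)

# Predict Tc for a new composition as the mean over its two nearest neighbors.
query = np.array([[0.1, 0.5, 0.4]])
pred = knn.predict(query)
print(pred)
```

The same scaffold extends to learned representations (e.g., Roost embeddings) by replacing the rows of `X` with the corresponding feature vectors while leaving the regressor unchanged.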