Amir Barati Farimani1
Carnegie Mellon University1
Machine learning (ML) models have been widely successful in predicting material properties. However, the large labeled datasets required to train accurate ML models are elusive and computationally expensive to generate. Recent advances in Self-Supervised Learning (SSL) frameworks, which train ML models on unlabeled data, mitigate this problem and have demonstrated superior performance in computer vision and natural language processing. Drawing inspiration from these developments, we introduce several recent models that incorporate language models, SSL, and multimodal training to enhance performance on material property prediction tasks. For example, we show that by sharing the pre-trained weights when fine-tuning the GNN on downstream tasks, we can significantly improve GNN performance on 14 challenging material property prediction benchmarks.
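As a rough illustration only (not the paper's exact pipeline), the following PyTorch sketch shows the general idea of sharing self-supervised pre-trained encoder weights when fine-tuning a GNN for a downstream property prediction task. The class names, dimensions, and the checkpoint file `ssl_pretrained_encoder.pt` are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class GNNEncoder(nn.Module):
    """Stand-in for a graph neural network encoder (hypothetical architecture)."""
    def __init__(self, hidden_dim=128):
        super().__init__()
        # Placeholder layers standing in for message-passing blocks.
        self.layers = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, x):
        return self.layers(x)

class PropertyPredictor(nn.Module):
    """Pre-trained encoder shared with a freshly initialized prediction head."""
    def __init__(self, encoder, hidden_dim=128):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(hidden_dim, 1)  # predicts a scalar material property

    def forward(self, x):
        return self.head(self.encoder(x))

# Load encoder weights produced by self-supervised pre-training (file name illustrative).
encoder = GNNEncoder()
encoder.load_state_dict(torch.load("ssl_pretrained_encoder.pt"))

# Fine-tune the full model (shared encoder + new head) on labeled property data.
model = PropertyPredictor(encoder)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```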