Apr 23, 2024
11:00am - 11:15am
Room 320, Level 3, Summit
Sajad Hashemi (1), Michael Guerzhoy (1,2), Noah Paulson (3)
(1) University of Toronto, (2) Li Ka Shing Knowledge Institute, St. Michael's Hospital, (3) Argonne National Laboratory
Over the past decade, generative machine learning has found application in the design of materials with tailored properties. Generative latent-variable models represent high-dimensional data in low-dimensional spaces. The original data is high-dimensional: for example, 2-D or 3-D image-based representations of materials microstructure with k pixels/voxels. New plausible examples of image-based representations can be generated from points in the low-dimensional space. The variational autoencoder (VAE) is a popular latent-variable model. The VAE is trained by minimizing the L2 distance in data space (2-D/3-D) between real and generated examples, in addition to a regularization term. This is suboptimal, since materials microstructure is stochastic: image-based representations should be considered similar if their statistical properties are similar, rather than only if they are close in Euclidean data space. We develop a novel VAE architecture that prioritizes statistical representations of materials microstructure and the generation of statistically similar microstructure examples from a single location in the latent space. A successful implementation of this architecture would greatly aid materials development and optimization through iterative materials simulations. We will demonstrate this capability on both synthetic and natural microstructure datasets.
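The distinction the abstract draws, between closeness in pixel space and closeness in statistics space, can be sketched numerically. The abstract does not specify which statistics the authors use; the two-point autocorrelation (a standard microstructure descriptor) is assumed here purely for illustration, and the function names are hypothetical:

```python
import numpy as np

def two_point_autocorr(img):
    """Two-point autocorrelation of a microstructure image, computed via FFT.
    One common statistical descriptor; the talk's actual choice of statistics
    is not stated in the abstract."""
    f = np.fft.fft2(img)
    corr = np.fft.ifft2(f * np.conj(f)).real / img.size
    return np.fft.fftshift(corr)

def pixel_l2(a, b):
    """Standard VAE-style reconstruction distance in data (pixel) space."""
    return np.mean((a - b) ** 2)

def statistical_l2(a, b):
    """L2 distance between statistical descriptors rather than raw pixels."""
    return np.mean((two_point_autocorr(a) - two_point_autocorr(b)) ** 2)

# Two independent realizations of the same stochastic microstructure model
# (here, a simple Bernoulli field with 30% volume fraction): their pixel
# arrangements differ, but their statistics nearly coincide.
rng = np.random.default_rng(0)
a = (rng.random((64, 64)) < 0.3).astype(float)
b = (rng.random((64, 64)) < 0.3).astype(float)

# Pixel-space L2 judges the two samples very different; statistics-space L2
# judges them nearly identical, which matches physical intuition.
print(pixel_l2(a, b), statistical_l2(a, b))
```

Under a pixel-wise loss, a generator reproducing one exact realization is rewarded; under a statistics-based loss, any realization with the right volume fraction and correlation structure scores well, which is the behavior the proposed architecture targets.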