How can increasing computational power be used to enhance materials discovery and improve the design of materials? Traditional approaches have relied on iterative experiments and are therefore slow. While much progress has been made in recent years (notably with high-throughput experiments), it still takes a decade or more to optimize materials for specific technological applications, because materials design is a complex, multi-scale, multi-physics optimization problem and the data needed to make informed choices are usually incomplete or unavailable. The problem is especially acute in the case of “soft” materials (plastics, liquid crystals, and complex fluids), which are ubiquitous in today’s world. Although theory blossomed in the 20th century, its actual use in the discovery of new materials is still limited. This critical issue has been identified by the USA’s Materials Genome Initiative: “The primary problem is that current predictive algorithms do not have the ability to model behavior and properties across multiple spatial and temporal scales; for example, researchers can measure the atomic vibrations of a material in picoseconds, but from that information they cannot predict how the material will wear down over the course of years.” The same report states that the change in methodology from a fragmented, experimentally based approach to an integrated, theory- and data-led approach is one of the engineering grand challenges of the 21st century. This symposium seeks to promote the integration of theory and experimental data into large-scale numerical simulations that span time and length scales and combine multiple physics for materials discovery and co-design (defined here as optimization with feedback mechanisms that integrate experimental and simulation data).
There are several reasons why this is timely: both the computational power available to the ordinary user and the capability of high-performance computing are increasing rapidly. Petascale (10¹⁵ FLOPS) machines are becoming common, and we are well on our way to exascale machines. This level of parallelism can be exploited in several ways. Examples include multiscale modeling, where models at different spatial and temporal scales are combined, either statically or dynamically, and High Throughput Computing (HTC), where a very large database of simulated properties of existing and hypothetical materials is constructed and then intelligently interrogated in search of materials with the desired properties. This leads naturally to data analytics (such as inference and machine-learning algorithms), which can be used to predict properties either from large sets of experimental data or from modeling results. In the case of materials co-design, feedback loops can increase the accuracy and the breadth of discovery.
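As a concrete illustration of the HTC-plus-analytics workflow described above, the following minimal Python sketch fits a surrogate model to a small set of "computed" properties and then uses it to screen a larger pool of candidates. The descriptors, the target property, and the data themselves are synthetic placeholders rather than any particular materials database; in a real workflow the table would come from first-principles or molecular simulation results, and the flagged candidates would be fed back for further simulation or experiment.

```python
# Minimal sketch of an HTC screening loop with a machine-learning surrogate.
# All data here are synthetic stand-ins for a database of simulated properties.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 1) "High-throughput" stage: each row is a candidate material with a few
#    hypothetical descriptors (e.g. density, bond length) and one property.
n_candidates = 5000
descriptors = rng.uniform(size=(n_candidates, 4))
# Hypothetical structure-property relation standing in for expensive simulations.
prop = (2.0 * descriptors[:, 0]
        - descriptors[:, 1] ** 2
        + 0.5 * descriptors[:, 2] * descriptors[:, 3]
        + 0.05 * rng.normal(size=n_candidates))

# 2) Data-analytics stage: train a surrogate on the subset that has been
#    "computed", keeping the rest as the pool to be screened.
X_known, X_pool, y_known, y_pool = train_test_split(
    descriptors, prop, test_size=0.8, random_state=0)
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_known, y_known)

# 3) Screening stage: interrogate the surrogate and flag the most promising
#    candidates for follow-up simulation or experiment (the co-design feedback loop).
predicted = surrogate.predict(X_pool)
top = np.argsort(predicted)[::-1][:10]
print("Top candidate indices:", top)
print("Predicted vs. reference property for best candidate:",
      predicted[top[0]], y_pool[top[0]])
```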
One of the major challenges is simulating materials behavior at the mesoscale. While we are able to model the very small (atomistic) and the very large (continuum) very successfully, the mesoscale (0.01–100 micrometers) is where collective behavior becomes apparent and the time and length scales of the associated phenomena become strongly coupled. Simulation at these spatio-temporal scales is required for “bottom-up” design, where molecular self-assembly is used as a fabrication tool. Another challenge is how to incorporate advances in experimental characterization, such as temporally and spatially resolved in situ measurements, into computational materials design. The topical list for this symposium reflects the challenges in soft-matter, complex-fluid, and composite organic-inorganic materials design, from the multi-scale, multi-physics difficulties in simulation to the opportunities afforded by increasing computational resources and the need to integrate with data analytics and experiment.