Brian Van Essen | 20-FS-041
Project Overview
Next-generation deep-neural-network hardware architectures provide an opportunity to accelerate emerging cognitive simulation scientific workflows. We explored the use of two accelerator platforms, the Cerebras CS-1 and the SambaNova SN10-8, on two exemplar applications that represent the two extremes of cognitive simulation: the Hermit and MaCC models. The Hermit model is a fast surrogate model that executes directly within the compute loop of a multi-physics code and demands low-latency responses for small numbers of samples per request. The MaCC model aggregates the output of many multi-physics simulations and trains a generative model whose output can be used for computational steering and monitoring of ongoing simulations.
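The inline-surrogate pattern that the Hermit model exemplifies can be sketched in a few lines. The code below is a hypothetical illustration only: the `TinySurrogate` class, its shapes, and the toy physics update are assumptions for exposition and are not drawn from the actual Hermit model, which is why low latency on small batches matters (the simulation stalls on every inference request).

```python
import numpy as np

# Hypothetical sketch of an inline-surrogate workflow: each timestep of a
# multi-physics loop issues a low-latency inference request on a handful
# of samples. All names and shapes here are illustrative assumptions.

rng = np.random.default_rng(0)

class TinySurrogate:
    """Stand-in surrogate: one hidden layer mapping local state to closure terms."""
    def __init__(self, n_in=4, n_hidden=16, n_out=2):
        self.w1 = rng.standard_normal((n_in, n_hidden)) * 0.1
        self.w2 = rng.standard_normal((n_hidden, n_out)) * 0.1

    def infer(self, x):
        # x: (batch, n_in); batches are small -- a few zones per request
        return np.tanh(x @ self.w1) @ self.w2

def run_simulation(steps=10, zones=8):
    surrogate = TinySurrogate()
    state = rng.standard_normal((zones, 4))
    for _ in range(steps):
        # The surrogate replaces an expensive physics sub-model each step;
        # the loop blocks until the (small-batch) inference returns.
        closure = surrogate.infer(state)
        # Toy update that consumes the surrogate output.
        state[:, :2] += 0.01 * closure
    return state

final_state = run_simulation()
print(final_state.shape)  # (8, 4)
```

A MaCC-style workflow inverts this pattern: instead of one inference per timestep, the outputs of many completed simulations are batched together to train a model offline, which favors throughput over latency.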
Preliminary results from this feasibility study show promising opportunities for using both accelerator architectures for these workloads. Modeling work shows that a disaggregated system design is feasible for cognitive simulation workloads. Challenges with the initial deployments and the evolving capabilities of the vendors' software stacks prevented a complete analysis of these new, enhanced workflows within the scope of this feasibility study. However, this work has positioned Lawrence Livermore National Laboratory at the forefront of developing an accelerated workflow for cognitive simulation using disaggregated accelerator hardware.
Mission Impact
Our work directly supports and builds upon the Laboratory's core competency in high-performance computing, simulation, and data science, and aims to improve capabilities in Livermore's stockpile stewardship mission focus area. Our evaluation of new artificial intelligence accelerators shows that they significantly improve the performance of key machine-learning kernels important to the Laboratory's cognitive simulation initiative, including the analysis of inertial confinement fusion.