Robust Control of Scientific Simulations with Deep Reinforcement Learning

Daniel Faissol | 21-SI-001

Project Overview

In this project we explored advancing the state of the art in deep reinforcement learning (DRL) to achieve breakthrough advances in large-scale scientific simulations, precision medicine, and rapid response to novel pathogens. We chose three high-impact Lawrence Livermore National Laboratory (LLNL) Science & Technology problems as exemplars of the algorithmic challenges we tackled. In Thrust 1, we developed DRL algorithms that learn how to perform adaptive mesh refinement (AMR) for finite element models. We introduced DynAMO, a multi-agent reinforcement learning paradigm for Dynamic Anticipatory Mesh Optimization, which discovers new local refinement policies that anticipate and respond to future solution states, producing meshes that deliver more accurate solutions over longer time intervals. By applying DynAMO to discontinuous Galerkin discretizations of the linear advection and compressible Euler equations in two dimensions, we demonstrated that this mesh refinement paradigm can outperform conventional threshold-based strategies while also generalizing to different mesh sizes, initial conditions, and remeshing and simulation times.
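To make the multi-agent formulation concrete, the following is a minimal, illustrative sketch (not DynAMO itself) of how AMR can be cast as a multi-agent environment: each element of a toy 1D advection mesh is an agent that observes a local error indicator and chooses to refine, keep, or coarsen, and all agents share a reward trading accuracy against element count. The class name, observation, and reward are assumptions for illustration only.

```python
import numpy as np

class MultiAgentAMREnv:
    """Toy 1D advection AMR environment (illustrative sketch, not DynAMO).

    Each mesh element is an agent; at every remeshing step it observes a
    local error indicator and picks refine / keep / coarsen. All agents
    share one reward trading solution accuracy against element count."""

    def __init__(self, n_init=16, speed=1.0, dt=0.05, cost=1e-3):
        self.edges = np.linspace(0.0, 1.0, n_init + 1)
        self.speed, self.dt, self.cost, self.t = speed, dt, cost, 0.0

    def exact(self, x):
        # Advected Gaussian pulse on a periodic unit domain.
        return np.exp(-200.0 * ((x - 0.5 - self.speed * self.t) % 1.0 - 0.5) ** 2)

    def observe(self):
        # Per-agent indicator: solution jump across each element.
        left, right = self.edges[:-1], self.edges[1:]
        return np.abs(self.exact(right) - self.exact(left))

    def step(self, actions):
        # actions[i] in {0: keep, 1: refine, 2: coarsen} for element i.
        new_edges = [self.edges[0]]
        for i, a in enumerate(actions):
            l, r = self.edges[i], self.edges[i + 1]
            if a == 1:                           # refine: split in half
                new_edges += [0.5 * (l + r), r]
            elif a == 2 and len(new_edges) > 1:  # coarsen: merge with left
                new_edges[-1] = r
            else:                                # keep
                new_edges.append(r)
        self.edges = np.array(new_edges)
        self.t += self.dt                        # advance the simulation
        # Error of a piecewise-constant (midpoint) mesh representation.
        mids = 0.5 * (self.edges[:-1] + self.edges[1:])
        fine = np.linspace(0.0, 1.0, 2001)
        proj = self.exact(mids)[np.searchsorted(self.edges[1:-1], fine)]
        err = np.sqrt(np.mean((self.exact(fine) - proj) ** 2))
        reward = -err - self.cost * (len(self.edges) - 1)  # shared team reward
        return self.observe(), reward
```

A simple threshold policy (refine where the indicator exceeds a cutoff) is the kind of baseline the learned anticipatory policies are compared against.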

In Thrust 2, we developed a reinforcement learning-based approach that learns simple, sparse symbolic simulation control policies, improving both sparsity and interpretability; the resulting policies delivered effective adaptive multi-cytokine treatments on simulated sepsis patients.
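The appeal of sparse symbolic policies is that they are short, auditable expressions rather than neural networks. The sketch below is purely illustrative, assuming a hypothetical marker name and coefficients and a toy regularized objective; the actual policies and patient model are in the project's publications.

```python
def symbolic_policy(state):
    """A hypothetical sparse symbolic policy: dose one mediator from a
    single observed marker (a short, clinician-readable rule)."""
    return max(0.0, 0.8 * state["il6"] - 0.2)

def score(policy, episodes, lam=0.05, complexity=3):
    """Toy objective: average episode return minus a sparsity penalty on
    expression size, illustrating regularized symbolic policy search."""
    total = 0.0
    for state, outcome in episodes:
        dose = policy(state)
        # Toy outcome model: reward good outcomes, penalize over-dosing.
        total += outcome - 0.1 * dose
    return total / len(episodes) - lam * complexity
```

Penalizing expression complexity during the search is what drives the learned policies toward sparse, interpretable forms.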

Finally, in Thrust 3, we developed DRL algorithms to support rapid in silico design of antibodies for novel pathogens. We introduced Language Model-accelerated Deep Symbolic Optimization (LA-DSO), which leverages language models to learn symbolic optimization solutions more efficiently. Applying LA-DSO to symbolic regression and computational antibody optimization showed that it accelerates learning in both low-computation and challenging real-world scenarios. The project also proposed Multi-Fidelity Deep Symbolic Optimization (MF-DSO), a novel approach that accounts for multiple evaluation fidelities and their associated costs, outperforming fidelity-agnostic and fidelity-aware baselines in the symbolic regression and antibody optimization domains.
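The cost-aware idea behind multi-fidelity optimization can be sketched as a two-stage screen: score every candidate with a cheap low-fidelity evaluator, then spend the remaining budget re-scoring only the most promising candidates at high fidelity. This is an illustrative sketch of the general principle, not MF-DSO's actual algorithm; the function name, costs, and promotion rule are assumptions.

```python
def mf_evaluate(candidates, low_fid, high_fid, cost_low=1.0, cost_high=10.0,
                budget=100.0, promote_frac=0.2):
    """Multi-fidelity screening sketch: cheap scores for everyone, expensive
    scores only for the top fraction, within a fixed evaluation budget."""
    spent = len(candidates) * cost_low
    # Rank all candidates by the cheap low-fidelity score (descending).
    scored = sorted(((low_fid(c), c) for c in candidates), reverse=True)
    top = [c for _, c in scored[: max(1, int(promote_frac * len(scored)))]]
    results = {}
    for c in top:
        if spent + cost_high > budget:   # stop when the budget is exhausted
            break
        results[c] = high_fid(c)         # expensive, trusted evaluation
        spent += cost_high
    return results, spent
```

In the antibody setting, the low-fidelity evaluator might be a fast surrogate score and the high-fidelity one an expensive physics-based computation, which is why accounting for per-fidelity cost matters.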

Mission Impact

This project directly supports LLNL's core competency in High-Performance Computing, Simulation, and Data Science by developing approaches at the cutting edge of data science that enable higher-fidelity, realistic, and reliable science and engineering simulations. This line of research provides a basis for new avenues of research in reinforcement learning to augment simulation capabilities on numerous fronts. Adaptive mesh refinement (AMR) capabilities are now common in LLNL simulation codes; indeed, over the last several decades AMR has become a standard feature of high-fidelity physics simulation codes across the DOE complex as well as in the DOD. The primary impact of this research is the potential increase in efficiency of AMR simulations, especially on accelerated architectures where data motion is expensive relative to computation. Further, the research significantly impacts LLNL missions by advancing the understanding of deep reinforcement learning for adaptive therapeutic strategies in combating complex and dynamic disease processes. The proposed methodologies, such as LA-DSO and MF-DSO, open avenues for efficiently solving real-world problems and contribute to the Laboratory's capabilities in addressing novel biological threats.

Publications, Presentations, and Patents

Yang, Jiachen, Tarik Dzanic, Brenden Petersen, Jun Kudo, Ketan Mittal, Vladimir Tomov, Jean-Sylvain Camier, Tuo Zhao, Hongyuan Zha, Tzanio Kolev, Robert Anderson, and Daniel Faissol. Reinforcement learning for adaptive mesh refinement. In Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, volume 206 of Proceedings of Machine Learning Research, pages 5997–6014. PMLR, 25–27 Apr 2023.

Yang, Jiachen, Ketan Mittal, Tarik Dzanic, Socratis Petrides, Brendan Keith, Brenden Petersen, Daniel Faissol, and Robert Anderson. Multi-agent reinforcement learning for adaptive mesh refinement. In Proceedings of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS '23), pages 14–22, 2023.

Pettit, Jacob F., et al. "Learning sparse symbolic policies for sepsis treatment." Presentation at the Interpretable Machine Learning in Healthcare workshop, International Conference on Machine Learning (ICML), 2021. LLNL-CONF-823601.

Silva, Felipe, et al. "Language Model-accelerated Deep Symbolic Optimization." Neural Computing and Applications, 2023.