Learning Interactions in Complex Biological Systems

Daniel Faissol | 17-ERD-036


In this project, we developed novel approaches that integrate simulation with data-driven methods to accelerate biomedical discovery. In particular, we combined agent-based models of disease processes with deep reinforcement learning (DRL) to identify potentially novel, multi-drug, patient-specific, and adaptive therapeutic strategies. As a demonstration of our approach, we used the Innate Immune Response agent-based model (IIRABM) to model sepsis. We first calibrated the agent-based model using novel stochastic optimization algorithms developed in this project, then used the calibrated model to identify optimal cytokine mediation strategies via DRL. The learned strategy achieved a dramatic reduction in mortality across a set of 500 simulated patients relative to standalone antibiotic therapy. The advantages of our approach are threefold: 1) simulation allows exploration of therapeutic strategies beyond current clinical practice and available data; 2) advances in DRL accommodate learning complex therapeutic strategies for complex biological systems; and 3) optimized treatments respond to a patient's individual disease progression over time, capturing both differences across patients and the inherent randomness of disease progression within a single patient.
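The control loop described above — an agent repeatedly observing a simulated patient's state, choosing an intervention, and learning from the simulated outcome — can be illustrated with a minimal sketch. The IIRABM environment and the project's actual DRL agent are not reproduced here; this toy stand-in uses tabular Q-learning on a hypothetical discretized "inflammation level" state with a single binary mediation action, purely to show the structure of learning an adaptive, state-dependent treatment policy from a stochastic simulation. All names, dynamics, and reward values are illustrative assumptions, not the project's model.

```python
# Hedged sketch: toy stand-in for learning a treatment policy from a
# stochastic patient simulation. The dynamics below are invented for
# illustration and do not reflect the IIRABM.
import random

random.seed(0)

N_STATES = 5    # discretized "inflammation" levels; 0 = recovered, 4 = death
N_ACTIONS = 2   # 0 = no mediation, 1 = apply cytokine mediation (hypothetical)

def step(state, action):
    """Toy stochastic patient dynamics: mediation biases toward recovery."""
    if action == 1 and random.random() < 0.7:
        drift = -1                                   # mediation often helps
    else:
        drift = -1 if random.random() < 0.3 else 1   # untreated: tends to worsen
    next_state = min(max(state + drift, 0), N_STATES - 1)
    if next_state == 0:
        return next_state, 1.0, True     # simulated recovery
    if next_state == N_STATES - 1:
        return next_state, -1.0, True    # simulated mortality
    return next_state, -0.01, False      # small per-timestep cost

def train(episodes=2000, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning with epsilon-greedy exploration."""
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = N_STATES // 2, False   # each "patient" starts mid-severity
        while not done:
            if random.random() < eps:
                a = random.randrange(N_ACTIONS)
            else:
                a = max(range(N_ACTIONS), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            target = r + gamma * max(Q[s2]) * (not done)
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

Q = train()
# The learned policy is adaptive: the chosen action depends on the
# patient's current simulated state, not on a fixed schedule.
policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
```

In the actual project, the tabular Q-function is replaced by a deep neural network and the toy dynamics by the calibrated IIRABM, but the interaction pattern — simulate, act, observe, update — is the same.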

Impact on Mission

This project has motivated the use of DRL across Lawrence Livermore National Laboratory missions. DRL is revolutionizing artificial intelligence, surpassing both human and machine performance on problems involving complex, real-time decision making. Previously, however, DRL had been applied mainly to games and robotics; this project provided an early demonstration of its power on science problems. In particular, this work directly led to a Livermore effort to apply DRL to adaptive mesh refinement for finite element modeling, a core capability of NNSA and Livermore. The DRL expertise gained in this project points to additional DRL applications at Livermore within cybersecurity, biosecurity, energy, and artificial intelligence.

Publications, Presentations, Etc.

Desautels T., et al. 2019. "Controlling a Sepsis Simulation with PILCO, a Model-learning Controller." Data Science Institute Workshop. LLNL-POST-751650.

Petersen B., et al. 2017. "Learning Mechanisms and Control Strategies for Agent-Based Models." Postdoctoral Poster Symposium, Livermore, CA, June 2017. LLNL-POST-732981.

––– . 2018. "Deep Reinforcement Learning as a Path Toward Precision Medicine." CASIS 2018, Livermore, CA, May 2018. LLNL-PRES-751582.

––– . 2018. "Precision Medicine as a Control Problem: Using Simulation and Deep Reinforcement Learning to Discover Adaptive, Personalized Multi-Cytokine Therapy for Sepsis." ICML Workshop on Computational Biology, Stockholm, Sweden, 2018. LLNL-POST-795611.

––– . 2019. "Deep Reinforcement Learning and Simulation as a Path Toward Precision Medicine." Journal of Computational Biology 26, 6: 597-604. LLNL-JRNL-745693.