Simon Lapointe | 20-FS-039
Effective control of turbulent flows can benefit many practical applications. Reducing turbulent skin friction on surfaces such as airplanes, ships, road vehicles, and pipes can yield significant benefits. For this reason, flow control strategies have been the subject of much research. We assessed the potential of reinforcement learning (RL) for closed-loop active control of turbulent channel flow, chosen as the main test case for its relevance to practical applications, the availability of control benchmarks, and its relatively low computational cost. A high-fidelity computational fluid dynamics simulation code was used to simulate the turbulent flow accurately and efficiently, and state-of-the-art machine learning techniques were employed to learn the control strategy. We found that a multi-agent RL approach was better suited to exploiting the large amount of data in turbulent flows than commonly used single-agent RL approaches. Cooperating RL agents, each observing local flow velocities and controlling the local wall boundary condition, learned a simple control strategy that reduced drag by more than 25 percent. The learned strategy resembles a well-known, physics-inspired control scheme. This is a promising result: it indicates that RL can be used to control turbulent flows and can potentially serve as an alternative to supervised learning for turbulent flow analysis and turbulence modeling. The results are expected to help direct future research in turbulent flow control and turbulence modeling for real-world applications, such as drag reduction on vehicles and control of cycle-to-cycle variations in internal combustion engines.
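The abstract does not name the physics-inspired strategy that the learned policy resembles; one well-known example from the literature is opposition control, in which wall blowing and suction oppose the wall-normal velocity sensed at a detection plane above the wall. Purely as an illustration of that kind of local, per-agent control rule (not the project's actual code), a minimal sketch might look like this, with the gain and actuation limit as hypothetical parameters:

```python
import numpy as np

def opposition_control(v_detection, gain=1.0, v_max=0.1):
    """Illustrative opposition-control rule: actuate the wall with a velocity
    opposing the wall-normal velocity sensed at a detection plane.

    v_detection : 2D array of wall-normal velocity at the sensing plane
    gain        : feedback gain (1.0 corresponds to classic opposition control)
    v_max       : actuation limit, clipped as a physical actuator would be
    """
    v_wall = -gain * v_detection
    return np.clip(v_wall, -v_max, v_max)

# Each grid point acts like an independent local "agent": it observes only
# the flow directly above it and sets only its own wall boundary condition.
rng = np.random.default_rng(0)
v_plane = 0.05 * rng.standard_normal((16, 16))  # synthetic velocity fluctuations
v_wall = opposition_control(v_plane)

assert v_wall.shape == v_plane.shape
```

The point of the sketch is the locality of the rule: each wall point needs only nearby flow information, which is the same structure the cooperating RL agents exploit.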
This work leveraged and advanced Lawrence Livermore National Laboratory's core competency in high-performance computing, simulation, and data science. The project combined high-performance computing and high-fidelity simulations of turbulent flows with state-of-the-art data science techniques, and it advanced the field of deep learning for practical engineering applications. Furthermore, the project's results are relevant to the Laboratory's mission focus area of energy security and climate resilience, as advances in flow control for drag reduction can enhance vehicle energy efficiency and reduce fossil fuel use.