Transfer of Deep Reinforcement Learning from Simulation to the Road to Reduce Automotive Fuel Consumption
Nicholas Killingsworth | 21-FS-043
There are two objectives for this feasibility study. The first is to establish whether machine learning can predict the heat release behavior of a premixed flame from the turbulent velocity field just before ignition of the fuel-air mixture, using simulation tools. If the expected flame propagation behavior cannot be learned in a simplified simulation environment, then active control is unlikely to improve the efficiency of the unstable combustion processes in lean and dilute engines. The second objective is to assess whether recent advances in machine learning allow a combustion controller to efficiently adapt to new operating conditions.
To work toward these goals, a two-dimensional turbulent reacting flow simulation with combustion chemistry is used. In a supervised learning setting, this simulation shows that the prediction error of the heat release is lower when information about the velocity flow field is included than when it is excluded. This indicates that a neural network can extract useful information from the turbulent flow field, allowing it to better predict flame propagation and the associated heat release.
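The kind of feature-ablation comparison described above can be sketched in a few lines. The following is a minimal illustration, not the project's actual model or data: it uses synthetic stand-in features (a low-dimensional "state" vector and hypothetical velocity-field features) and closed-form ridge regression in place of the neural network, to show how including velocity-field information lowers the prediction error for a target that actually depends on it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-ins: a target ("heat release") that depends on both
# scalar state variables and turbulent velocity-field features.
state = rng.normal(size=(n, 2))       # e.g., pressure/temperature proxies
velocity = rng.normal(size=(n, 4))    # hypothetical flow-field features
heat_release = (state @ np.array([1.0, -0.5])
                + velocity @ np.array([0.8, 0.3, -0.6, 0.4])
                + 0.1 * rng.normal(size=n))

def ridge_fit_predict(X, y, lam=1e-3):
    """Closed-form ridge regression; returns in-sample predictions."""
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return X @ w

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

# Compare prediction error without vs. with velocity-field information.
err_without = rmse(heat_release, ridge_fit_predict(state, heat_release))
err_with = rmse(heat_release,
                ridge_fit_predict(np.hstack([state, velocity]), heat_release))
assert err_with < err_without  # flow-field features reduce the error
```

In the study itself the predictor is a neural network trained on simulation output rather than a linear model on synthetic data, but the ablation logic, training the same model class with and without the flow-field features and comparing errors, is the same.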
Given that information about the velocity flow field can aid in predicting heat release, the use of reinforcement learning (RL) for active control is explored. RL is a promising machine learning method, but it requires extensive training on the system of interest to learn effective control policies. If the system of interest is resource intensive, or if learning on it raises safety issues, it can be advantageous to learn on a computationally inexpensive model that exhibits similar behavior and then transfer the learned policy. Meta-learning is even more promising than traditional transfer learning: during initial training it learns how best to optimize its network, allowing it to adapt to new tasks in only a few runs and reducing the number of computationally expensive simulations that must be run. Meta-learning methods are compared to more traditional transfer learning methods by applying them to the 2-D turbulent reacting flow simulation with a one-step chemistry model; the trained control policy is then transferred to a more complex simulation with a multi-step chemistry model. The methods developed can be applied to reducing cyclic variability in dilute spark-ignited engines, along with other inherently unstable combustion processes, to increase their efficiency. Furthermore, these methods have the potential to rapidly deploy new engine controllers, possibly on the fly, to take advantage of low-lifecycle-carbon fuels without a costly redesign, increasing the rate of decarbonization.
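The few-shot adaptation idea behind meta-learning can be illustrated with a toy example. The sketch below is purely illustrative and is not the project's method: it uses the Reptile meta-learning algorithm (a simple first-order alternative to MAML) on a family of one-parameter regression tasks, where each random slope plays the role of a new operating condition. After meta-training, a few gradient steps from the meta-learned initialization fit an unseen task far better than the same few steps from a naive initialization.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Hypothetical task family: fit y = a*x for a random slope a,
    standing in for a new engine operating condition."""
    return rng.uniform(0.5, 2.5)

def inner_sgd(w, a, steps=5, lr=0.1):
    """A few SGD steps on mean-squared-error loss for one task."""
    for _ in range(steps):
        x = rng.normal(size=8)
        y = a * x
        grad = np.mean(2.0 * (w * x - y) * x)  # d/dw of the MSE
        w = w - lr * grad
    return w

# Reptile meta-training: repeatedly adapt to a sampled task, then nudge
# the shared initialization toward the adapted weights.
meta_w = 0.0
for _ in range(200):
    a = sample_task()
    adapted = inner_sgd(meta_w, a, steps=5)
    meta_w += 0.5 * (adapted - meta_w)

def eval_loss(w, a):
    x = np.linspace(-1.0, 1.0, 50)
    return float(np.mean((w * x - a * x) ** 2))

# Adapt to an unseen task with only 3 steps: meta-learned init vs. a
# naive init far from the task family.
a_new = 1.7
w_meta = inner_sgd(meta_w, a_new, steps=3)
w_naive = inner_sgd(-3.0, a_new, steps=3)
assert eval_loss(w_meta, a_new) < eval_loss(w_naive, a_new)
```

In the study the "tasks" are RL control problems on reacting-flow simulations and the model is a deep network, but the payoff is the same: a meta-learned initialization lets the controller adapt to a new condition with only a few expensive simulation runs.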
This work leveraged and advanced Lawrence Livermore National Laboratory's (LLNL's) core competencies in high-performance computing, simulation, and data science. The project used high-performance computing and high-fidelity simulations of turbulent reacting flows along with state-of-the-art data science techniques, resulting in advances in the field of deep learning for practical engineering applications. The results are relevant to LLNL's Mission Focus Area of energy and climate security: advanced control of turbulent reacting flows can improve internal combustion engine efficiency and reduce fossil fuel use, helping to decarbonize transportation.