ForwardProp: A Novel Gradient-Free Learning Paradigm for Integrating Deep Neural Networks with Scientific Simulations

James Diffenderfer | 23-ERD-030

Executive Summary

We propose to use gradient-free training to develop a learning paradigm capable of training simulation-coupled machine-learning models to achieve high performance while remaining easy to use with any existing simulation code. This project will deliver substantially improved simulation accuracy and speed, providing a deeper understanding of a broad range of complex science applications.
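Gradient-free training replaces backpropagation through the simulation with gradient estimates built purely from loss evaluations, as in the zeroth-order optimization scaled up by the DeepZero work listed below. The sketch here is illustrative only, assuming a two-point randomized gradient estimator; the function names and hyperparameters are not the project's API.

```python
import numpy as np

def zo_gradient(loss_fn, theta, mu=1e-3, num_queries=20, rng=None):
    """Estimate the gradient of loss_fn at theta using only function
    queries (two-point randomized finite differences), so loss_fn may
    wrap a non-differentiable simulation code."""
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(theta)
    for _ in range(num_queries):
        u = rng.standard_normal(theta.shape)
        # Finite difference along a random direction u; no backprop needed.
        grad += (loss_fn(theta + mu * u) - loss_fn(theta - mu * u)) / (2 * mu) * u
    return grad / num_queries

# Toy usage: minimize a quadratic stand-in for a simulation-coupled loss.
loss = lambda w: float(np.sum((w - 3.0) ** 2))
rng = np.random.default_rng(0)
w = np.zeros(4)
for _ in range(200):
    w -= 0.1 * zo_gradient(loss, w, rng=rng)
```

Because each estimate needs only forward evaluations, the same loop works when the loss involves a black-box simulator, at the cost of extra queries per step.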

Publications, Presentations, and Patents

Bartoldson, Brian R., Bhavya Kailkhura, and Davis Blalock. 2023. “Compute-Efficient Deep Learning: Algorithmic Trends and Opportunities.” Journal of Machine Learning Research.

Mehra, Akshay, Yunbei Zhang, Bhavya Kailkhura, and Jihun Hamm. 2023. "On the Fly Neural Style Smoothing for Risk-Averse Domain Generalization." Presentation at the ICML AdvML-Frontiers Workshop, Honolulu, HI.

Chen, Aochuan, Yimeng Zhang, Jinghan Jia, James Diffenderfer, Jiancheng Liu, Konstantinos Parasyris, Yihua Zhang, Zheng Zhang, Bhavya Kailkhura, and Sijia Liu. 2023. "DeepZero: Scaling Up Zeroth-Order Optimization for Deep Model Training." Poster presentation at the Monterey Data Conference, Monterey, CA, August 2023.