Derivative-Informed Training of Neural Operators on the Fly
Shancong Mou
University of Minnesota, Twin Cities

Abstract
Recently, training deep neural operators with derivative information—typically using derivative pairs generated offline—has shown clear benefits both during pretraining and in downstream PDE-constrained optimization. In this talk, I will discuss recent developments in on-the-fly derivative-informed training: how to generate and incorporate derivative information during training, what important lessons we have learned, and how these insights can further improve downstream PDE-constrained optimization.
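The core idea of on-the-fly derivative-informed training can be sketched as follows: at each step, sample fresh inputs and directions, query the target map for both outputs and Jacobian-vector products, and penalize mismatch in both. This is a minimal illustrative sketch, not the speaker's actual method; the toy `target` map stands in for an expensive PDE solution operator, and all names and hyperparameters are assumptions.

```python
# Sketch of derivative-informed training with on-the-fly derivative pairs.
# Assumptions: `target` is a toy stand-in for a PDE solution operator; the
# surrogate is a small MLP; all names/sizes are illustrative, not the
# method from the talk.
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
key, k1, k2, k3 = jax.random.split(key, 4)
W_true = jax.random.normal(k1, (4, 3))

def target(x):
    # Hypothetical parameter-to-observable map (in practice: a PDE solve).
    return jnp.tanh(x @ W_true)

params = {
    "W1": 0.1 * jax.random.normal(k2, (4, 16)),
    "W2": 0.1 * jax.random.normal(k3, (16, 3)),
}

def model(params, x):
    # Small MLP surrogate (the "neural operator" in this toy setting).
    return jnp.tanh(x @ params["W1"]) @ params["W2"]

def loss(params, x, v):
    # jax.jvp returns the output and the directional derivative in one pass,
    # so both the value mismatch and the Jacobian-vector-product mismatch
    # enter the training loss.
    y_hat, dy_hat = jax.jvp(lambda z: model(params, z), (x,), (v,))
    y, dy = jax.jvp(target, (x,), (v,))
    return jnp.mean((y_hat - y) ** 2) + jnp.mean((dy_hat - dy) ** 2)

grad_fn = jax.jit(jax.value_and_grad(loss))
lr = 0.2
history = []
for step in range(300):
    key, kx, kv = jax.random.split(key, 3)
    x = jax.random.normal(kx, (64, 4))
    v = jax.random.normal(kv, (64, 4))  # fresh directions each step: "on the fly"
    val, grads = grad_fn(params, x, v)
    history.append(float(val))
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

print(history[0], history[-1])
```

Sampling new random directions `v` at every step means no offline database of derivative pairs is ever stored; each JVP of the target is generated on demand and used once, which is one plausible reading of the "on the fly" setting described above.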