Neural operators are deep learning architectures designed to approximate operators, i.e., mappings between infinite-dimensional function spaces. They have been widely applied to problems involving partial differential equations, such as predicting solutions from given initial or boundary conditions. Despite their empirical success, several theoretical questions remain unresolved. In this talk, we will discuss the analysis of error convergence and generalization; the results hold for a broad class of widely used neural operators. These theoretical developments further motivate the design of distributed and federated learning algorithms that leverage the underlying structure of neural operator approximations to address two key challenges in practical applications: (1) handling heterogeneous and multiscale input functions, and (2) extending the framework to a multi-operator learning setting that generalizes to previously unseen tasks. Numerical evidence for these applications will be presented.
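To make the operator-learning setup concrete, below is a minimal DeepONet-style sketch in PyTorch: a branch network encodes an input function sampled at fixed sensor locations, a trunk network encodes a query coordinate, and their inner product approximates the output function at that coordinate. This is an illustrative example only, not the specific architectures or algorithms analyzed in the talk; all names, layer sizes, and the choice of a one-dimensional query domain are assumptions.

```python
# Minimal DeepONet-style neural operator sketch (illustrative; hypothetical sizes).
import torch
import torch.nn as nn

class DeepONetSketch(nn.Module):
    def __init__(self, n_sensors=100, latent_dim=64):
        super().__init__()
        # Branch net: encodes the input function u sampled at n_sensors points.
        self.branch = nn.Sequential(
            nn.Linear(n_sensors, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Trunk net: encodes a query coordinate y in the output domain.
        self.trunk = nn.Sequential(
            nn.Linear(1, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, u_sensors, y):
        # u_sensors: (batch, n_sensors) sensor values of the input function
        # y: (batch, 1) query locations in the output domain
        b = self.branch(u_sensors)   # (batch, latent_dim)
        t = self.trunk(y)            # (batch, latent_dim)
        # Inner product approximates G(u)(y), the target operator applied to u.
        return (b * t).sum(dim=-1, keepdim=True)

# Usage: predict solution values at query points from sampled input functions.
model = DeepONetSketch()
u = torch.randn(8, 100)   # 8 input functions, each sampled at 100 sensors
y = torch.rand(8, 1)      # one query point per input function
pred = model(u, y)        # (8, 1) predicted output values
```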