Physics-based simulations play a vital role in many scientific, engineering, and national security domains, including energy infrastructure, atmospheric sciences, and molecular dynamics. They are frequently critical for assessing risk and exploring “what if” scenarios, which require running models many times. Emulators (also known as surrogate models) are models trained to mimic numerical simulations at a much lower computational cost, particularly for parameters or inputs that have not been simulated. In this talk, I will describe new insights and methodologies for two classes of emulators. First, we will examine data-driven emulators, which learn to mimic a black-box simulator or PDE solver using training samples. Existing methods in this space struggle to emulate chaotic systems in which small perturbations in initial conditions cause trajectories to diverge at an exponential rate. In this setting, emulators trained to minimize squared error losses, while capable of accurate short-term forecasts, often fail to reproduce statistical or structural properties of the dynamics over longer time horizons and can yield degenerate results. I will describe an alternative framework based on contrastive learning designed to preserve invariant measures of chaotic attractors that characterize the time-invariant statistical properties of the dynamics. Second, we will explore physics-informed neural networks used to solve known differential equations (without using training data) in the context of numerical simulation of a time-evolving Schrödinger equation inspired by generative models. I will describe an approach that adapts to the latent low-dimensional structure of the problem, highlighting how physics-informed neural networks can yield substantial computational speedups. This is joint work with Ruoxi Jiang, Elena Orlova, Aleksei Ustimenko, and Peter Y. Lu.
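The sensitive dependence on initial conditions described above can be seen in a few lines. The sketch below (my illustration, not part of the talk) integrates the classic Lorenz-63 system from two initial conditions that differ by 1e-8 and tracks how the gap between the trajectories grows:

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 system, a standard chaotic ODE."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, state, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def trajectory(x0, n_steps, dt=0.01):
    states = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        states.append(rk4_step(lorenz, states[-1], dt))
    return np.array(states)

# Two trajectories whose initial conditions differ by 1e-8 in one coordinate.
eps = 1e-8
a = trajectory([1.0, 1.0, 1.0], n_steps=2000)          # t = 0 .. 20
b = trajectory([1.0 + eps, 1.0, 1.0], n_steps=2000)
separation = np.linalg.norm(a - b, axis=1)

# The gap grows by many orders of magnitude: exponential sensitivity.
print(f"initial gap: {separation[0]:.1e}, final gap: {separation[-1]:.1e}")
```

This is exactly the regime where pointwise squared-error forecasts must eventually fail, which motivates targeting statistical properties of the attractor instead of individual trajectories.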
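For readers unfamiliar with contrastive learning, here is a generic InfoNCE-style loss in NumPy; it is a rough illustration of the contrastive idea (pulling matched pairs together relative to mismatched pairs), not the invariant-measure-preserving objective developed in the talk. All names (`info_nce`, `anchors`, `positives`, the temperature value) are my own choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: each anchor should be most similar to its own positive."""
    logits = normalize(anchors) @ normalize(positives).T / temperature
    # Row-wise log-softmax (max-subtracted for numerical stability),
    # cross-entropy against the diagonal: matched pairs are the targets.
    m = logits.max(axis=1, keepdims=True)
    log_probs = logits - (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True)))
    return -np.mean(np.diag(log_probs))

n, d = 8, 32
anchors = rng.normal(size=(n, d))
# Matched positives: small perturbations of the anchors -> low loss.
loss_matched = info_nce(anchors, anchors + 0.01 * rng.normal(size=(n, d)))
# Mismatched positives: unrelated random vectors -> high loss.
loss_mismatched = info_nce(anchors, rng.normal(size=(n, d)))
print(loss_matched, loss_mismatched)
```

In the emulation setting one would choose the "positive" pairs so that representations capture time-invariant statistics of the dynamics rather than instance identity; the mechanics of the loss are the same.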
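The "no training data" aspect of physics-informed networks can be illustrated on a toy problem. The sketch below (an assumption-laden toy, not the talk's Schrödinger-equation method) fits the one-parameter ansatz u(t) = exp(w·t) to the ODE u' + u = 0 purely by minimizing the equation residual at collocation points; the true solution corresponds to w = -1:

```python
import numpy as np

# Collocation points where the ODE residual is enforced (no training data).
t = np.linspace(0.0, 1.0, 50)

def residual_loss(w):
    """Mean squared residual of u' + u = 0 for the ansatz u(t) = exp(w * t).

    Since u' = w * exp(w * t), the residual is (w + 1) * exp(w * t); the
    loss vanishes exactly at w = -1, recovering the solution u = exp(-t).
    """
    r = (w + 1.0) * np.exp(w * t)
    return np.mean(r ** 2)

# Gradient descent on the physics residual, with a finite-difference gradient
# standing in for the automatic differentiation a real PINN would use.
w, lr, h = 0.0, 0.3, 1e-5
for _ in range(500):
    grad = (residual_loss(w + h) - residual_loss(w - h)) / (2 * h)
    w -= lr * grad

print(f"recovered w = {w:.4f} (true value -1)")
```

A real physics-informed network replaces the hand-picked ansatz with a neural network and differentiates it exactly; exploiting latent low-dimensional structure, as the talk describes, is what makes this competitive with conventional solvers.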