Plasma Turbulence in TensorFlow: Reduced Models and Optimization
Victor Artigues
Max-Planck-Institut
Tokamak Theory
Simulating plasma turbulence is a computationally challenging task due to the complex interplay of multi-scale dynamics. To address this challenge, we explore the use of artificial intelligence (AI) together with physics models to develop cheaper, lower-fidelity models that can accelerate simulations. In this talk, we will present our research combining two complementary approaches. First, we will discuss the use of convolutional neural networks (CNNs) to learn closure terms in large-eddy simulations of the 2D Hasegawa-Wakatani model, enabling efficient and accurate predictions of plasma behavior beyond the training range. Our CNN-based model demonstrates strong generalization capabilities, accurately predicting the particle flux at parameters up to a factor of 5 outside the training range. We will also address the challenge of initialization in machine-learning-accelerated plasma simulations by leveraging existing simulations at known parameters to initialize simulations at unseen parameters.
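To illustrate the first approach, the sketch below shows a minimal CNN of the kind that can map coarse-grained (filtered) fields to a sub-grid closure term. The architecture, layer sizes, grid resolution, and the choice of density and potential as input channels are illustrative assumptions, not the model presented in the talk.

```python
import numpy as np
import tensorflow as tf

def build_closure_cnn(grid=64, channels=2):
    """Illustrative closure model: filtered fields in, closure term out.

    The two input channels stand for filtered density and potential of
    the 2D Hasegawa-Wakatani system (an assumption for this sketch).
    """
    inp = tf.keras.Input(shape=(grid, grid, channels))
    x = tf.keras.layers.Conv2D(32, 5, padding="same", activation="relu")(inp)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    # One closure-term channel per input field, same spatial resolution.
    out = tf.keras.layers.Conv2D(channels, 3, padding="same")(x)
    return tf.keras.Model(inp, out)

model = build_closure_cnn()
# Stand-in for a batch of filtered simulation snapshots.
fields = np.random.rand(1, 64, 64, 2).astype("float32")
closure = model(fields)
print(closure.shape)  # (1, 64, 64, 2)
```

Because the convolutions are resolution-preserving (`padding="same"`), the predicted closure term can be added pointwise to the right-hand side of the coarse-grained evolution equations at each time step.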
Next, we will present our ongoing work on extending this framework to the full gyrokinetic model, leveraging a novel differentiable flux-tube gyrokinetic code implemented in TensorFlow. This code enables the computation of gradients of key plasma quantities with respect to input parameters, facilitating powerful new methodologies such as profile prediction via gradient descent optimization. We will demonstrate the versatility of our approach with applications ranging from simplified to complex profile predictions, showcasing the transformative potential of differentiable physics for fusion research.
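The profile-prediction idea above can be sketched in a few lines: differentiate a solver output with respect to its input parameters and descend toward a target. In this toy version, `toy_flux` is a placeholder scalar function standing in for the differentiable gyrokinetic code, and `grad_param` stands for an input profile parameter; both names and the target value are assumptions for illustration only.

```python
import tensorflow as tf

def toy_flux(grad_param):
    # Placeholder for the differentiable solver: maps an input
    # parameter (e.g. a normalized gradient) to a scalar "flux".
    return tf.exp(grad_param) - 1.0

target_flux = tf.constant(2.0)       # hypothetical measured flux
grad_param = tf.Variable(0.0)        # parameter to be inferred
opt = tf.keras.optimizers.SGD(learning_rate=0.1)

for _ in range(200):
    with tf.GradientTape() as tape:
        loss = (toy_flux(grad_param) - target_flux) ** 2
    grads = tape.gradient(loss, [grad_param])
    opt.apply_gradients(zip(grads, [grad_param]))

# For this toy flux model the optimum is log(3) ~= 1.0986.
print(float(grad_param))
```

The same pattern carries over to the real code: because the flux-tube solver is written in TensorFlow, `tf.GradientTape` can propagate gradients through the whole simulation, so matching a simulated flux to a target becomes an ordinary optimization loop.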