TGLF-SINN: Deep Learning Surrogate Model for Accelerating Turbulent Transport Modeling in Fusion

📅 2025-09-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
In tokamak whole-device turbulent transport simulations, the TGLF model suffers from high computational cost, while existing neural-network surrogate models require prohibitively large training datasets. To address these challenges, we propose TGLF-SINN (Spectra-Informed Neural Network): a physics-informed surrogate that extracts dominant transport-spectrum patterns via physics-guided feature engineering, enforces physical consistency through transport-spectrum regularization, and employs Bayesian active learning to efficiently select the most informative training samples. TGLF-SINN achieves near-full-dataset accuracy using only 25% of the training data, accelerates inference by 45× over TGLF, and significantly reduces the logarithmic root-mean-square error (LRMSE) compared to baseline surrogates. By combining differentiability, high fidelity, computational efficiency, and strong generalizability, TGLF-SINN provides a practical, scalable acceleration framework for large-scale integrated tokamak simulations.

📝 Abstract
The Trapped Gyro-Landau Fluid (TGLF) model provides fast, accurate predictions of turbulent transport in tokamaks, but whole-device simulations requiring thousands of evaluations remain computationally expensive. Neural network (NN) surrogates offer accelerated inference with fully differentiable approximations that enable gradient-based coupling, but they typically require large training datasets to capture transport-flux variations across plasma conditions, creating a significant training burden and limiting applicability to expensive gyrokinetic simulations. We propose TGLF-SINN (Spectra-Informed Neural Network) with three key innovations: (1) principled feature engineering that reduces the target prediction range, simplifying the learning task; (2) physics-guided regularization of transport spectra to improve generalization under sparse data; and (3) Bayesian Active Learning (BAL) to strategically select training samples based on model uncertainty, reducing data requirements while maintaining accuracy. Our approach achieves superior performance with significantly less training data. In offline settings, TGLF-SINN reduces logarithmic root-mean-square error (LRMSE) by 12.4% compared to the current baseline. Using only 25% of the complete dataset with BAL, we achieve an LRMSE only 0.0165 higher than that baseline and 0.0248 higher than our offline model (0.0583). In downstream flux-matching applications, our NN surrogate provides a 45× speedup over TGLF while maintaining comparable accuracy, demonstrating potential for training efficient surrogates for higher-fidelity models where data acquisition is costly and sparse.
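The abstract reports results in LRMSE but does not define it on this page; a minimal sketch, assuming LRMSE is the root-mean-square error computed between log-transformed flux values (the exact transform, e.g. sign handling for negative fluxes, is an assumption here):

```python
import numpy as np

def lrmse(pred, true, eps=1e-12):
    """Root-mean-square error between log-transformed values.

    `eps` guards against log(0); taking the log of the absolute value
    is an assumption, not necessarily the paper's exact convention.
    """
    log_p = np.log(np.abs(pred) + eps)
    log_t = np.log(np.abs(true) + eps)
    return np.sqrt(np.mean((log_p - log_t) ** 2))
```

Under this reading, a model whose predictions are off by a constant multiplicative factor incurs a constant LRMSE, which suits transport fluxes that span many orders of magnitude.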
Problem

Research questions and friction points this paper is trying to address.

Accelerating turbulent transport modeling in fusion devices
Reducing computational cost of whole device simulations
Minimizing training data requirements for neural surrogates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Principled feature engineering reduces prediction range complexity
Physics-guided regularization improves sparse data generalization
Bayesian Active Learning strategically selects training samples
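The BAL selection rule is not spelled out on this page; a toy sketch, assuming ensemble disagreement (predictive variance) as the uncertainty signal and a hypothetical `select_batch` helper:

```python
import numpy as np

def select_batch(models, pool, k):
    """Return indices of the k pool points where an ensemble of
    surrogate models disagrees most (variance as uncertainty proxy)."""
    preds = np.stack([m(pool) for m in models])  # shape: (n_models, n_pool)
    uncertainty = preds.var(axis=0)              # per-point predictive variance
    return np.argsort(uncertainty)[-k:][::-1]    # most uncertain first

# Toy demo: three linear "surrogates" that diverge as |x| grows,
# so the most uncertain candidates are the largest inputs.
models = [lambda x, a=a: a * x for a in (0.9, 1.0, 1.1)]
pool = np.linspace(0.0, 2.0, 5)                  # candidate inputs
chosen = select_batch(models, pool, k=3)         # indices to label (run TGLF on) next
```

Points selected this way are then evaluated with the expensive model (here, TGLF) and added to the training set, so each labeling round is spent where the surrogate is least certain.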
Yadi Cao
University of California San Diego
Scientific Machine Learning, Numerical PDEs, Computational Mechanics, Fluid Dynamics
Futian Zhang
University of California, San Diego, La Jolla, CA 92093, USA
Wesley Liu
University of California, San Diego, La Jolla, CA 92093, USA
Tom Neiser
General Atomics, P.O. Box 85608, San Diego, CA 92186, USA
Orso Meneghini
General Atomics
Plasma Physics, Fusion, Tokamaks, DIII-D, ITER
Lawson Fuller
University of California, San Diego, La Jolla, CA 92093, USA
Sterling Smith
General Atomics, P.O. Box 85608, San Diego, CA 92186, USA
Raffi Nazikian
General Atomics
Plasma Physics, Fusion Energy
Brian Sammuli
General Atomics, P.O. Box 85608, San Diego, CA 92186, USA
Rose Yu
Associate Professor, University of California, San Diego
Machine Learning, Computational Sustainability