LLM4Fluid: Large Language Models as Generalizable Neural Solvers for Fluid Dynamics

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes LLM4Fluid, a framework that leverages pre-trained large language models as generalizable neural solvers for fluid dynamics. To address the limited generalization of conventional deep learning approaches, which typically require retraining for new flow conditions, LLM4Fluid introduces a physics-informed, disentangled reduced-order modeling strategy that maps high-dimensional flow fields into a compact latent space. By combining modality alignment with an autoregressive temporal prediction mechanism, the framework enables accurate long-term forecasting of flow evolution across diverse scenarios without retraining. LLM4Fluid exhibits zero-shot and in-context learning capabilities, achieves high predictive accuracy on a range of complex flows, and significantly outperforms existing methods, highlighting its strong generalization performance.

📝 Abstract
Deep learning has emerged as a promising paradigm for spatio-temporal modeling of fluid dynamics. However, existing approaches often suffer from limited generalization to unseen flow conditions and typically require retraining when applied to new scenarios. In this paper, we present LLM4Fluid, a spatio-temporal prediction framework that leverages Large Language Models (LLMs) as generalizable neural solvers for fluid dynamics. The framework first compresses high-dimensional flow fields into a compact latent space via reduced-order modeling enhanced with a physics-informed disentanglement mechanism, effectively mitigating spatial feature entanglement while preserving essential flow structures. A pretrained LLM then serves as a temporal processor, autoregressively predicting the dynamics of physical sequences with time series prompts. To bridge the modality gap between prompts and physical sequences, which can otherwise degrade prediction accuracy, we propose a dedicated modality alignment strategy that resolves representational mismatch and stabilizes long-term prediction. Extensive experiments across diverse flow scenarios demonstrate that LLM4Fluid functions as a robust and generalizable neural solver without retraining, achieving state-of-the-art accuracy while exhibiting powerful zero-shot and in-context learning capabilities. Code and datasets are publicly available at https://github.com/qisongxiao/LLM4Fluid.
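The abstract's two-stage pipeline (compress flow fields into a latent space via reduced-order modeling, then predict latent dynamics autoregressively) can be sketched on toy data. This is an illustrative assumption, not the paper's implementation: plain POD/SVD stands in for the physics-informed disentangled encoder, and a least-squares linear operator stands in for the pretrained LLM temporal processor.

```python
import numpy as np

# Hypothetical sketch of the two-stage pipeline from the abstract.
# POD/SVD replaces the paper's physics-informed encoder; a fitted linear
# operator replaces the pretrained LLM temporal processor.

rng = np.random.default_rng(0)

# Toy "flow" data: T snapshots of an N-dimensional field driven by r modes.
T, N, r = 64, 200, 2
t = np.linspace(0, 4 * np.pi, T)
modes = rng.standard_normal((N, r))
coeffs = np.stack([np.sin(t), np.cos(t)], axis=1)   # (T, r) temporal coefficients
snapshots = coeffs @ modes.T                        # (T, N) flow snapshots

# Stage 1: reduced-order encoding via truncated SVD of the snapshot matrix.
U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
basis = Vt[:r].T                                    # (N, r) spatial modes
latent = snapshots @ basis                          # (T, r) latent trajectory

# Stage 2: autoregressive latent dynamics. Fit z_{k+1} ≈ z_k A by least
# squares, then roll the latent state forward from the trajectory midpoint.
A, *_ = np.linalg.lstsq(latent[:-1], latent[1:], rcond=None)
z = latent[T // 2]
for _ in range(T - 1 - T // 2):
    z = z @ A                                       # one-step latent prediction
pred_final = z @ basis.T                            # decode back to the field

err = np.linalg.norm(pred_final - snapshots[-1]) / np.linalg.norm(snapshots[-1])
print(f"relative rollout error: {err:.2e}")
```

Because the toy dynamics are exactly linear in the latent space, the fitted operator rolls out essentially without error; the paper's contribution is precisely that an LLM-based temporal processor generalizes where such a fitted surrogate would not.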
Problem

Research questions and friction points this paper is trying to address.

fluid dynamics
generalization
spatio-temporal modeling
neural solvers
zero-shot learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Fluid Dynamics
Reduced-Order Modeling
Modality Alignment
Zero-shot Learning
Qisong Xiao
National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, Changsha 410073, China; Laboratory of Digitizing Software for Frontier Equipment, National University of Defense Technology, Changsha 410073, China; College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China
Xinhai Chen
National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, Changsha 410073, China; Laboratory of Digitizing Software for Frontier Equipment, National University of Defense Technology, Changsha 410073, China; College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China
Qinglin Wang
National University of Defense Technology
Parallel algorithms; High Performance Computing; Deep Learning; Machine Learning; GPU
Xiaowei Guo
National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, Changsha 410073, China; Laboratory of Digitizing Software for Frontier Equipment, National University of Defense Technology, Changsha 410073, China; College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China
Binglin Wang
National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, Changsha 410073, China; Laboratory of Digitizing Software for Frontier Equipment, National University of Defense Technology, Changsha 410073, China; College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China
Weifeng Chen
National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, Changsha 410073, China; Laboratory of Digitizing Software for Frontier Equipment, National University of Defense Technology, Changsha 410073, China; College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China
Zhichao Wang
National University of Defense Technology
AI for CFD; AI for Mesh; PINN; Mesh Optimization
Yunfei Liu
National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, Changsha 410073, China; Laboratory of Digitizing Software for Frontier Equipment, National University of Defense Technology, Changsha 410073, China; College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China
Rui Xia
National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, Changsha 410073, China; Laboratory of Digitizing Software for Frontier Equipment, National University of Defense Technology, Changsha 410073, China; College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China
Hang Zou
National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, Changsha 410073, China; Laboratory of Digitizing Software for Frontier Equipment, National University of Defense Technology, Changsha 410073, China; College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China
Gencheng Liu
National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, Changsha 410073, China; Laboratory of Digitizing Software for Frontier Equipment, National University of Defense Technology, Changsha 410073, China; College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China
Shuai Li
National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, Changsha 410073, China; Laboratory of Digitizing Software for Frontier Equipment, National University of Defense Technology, Changsha 410073, China; College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China
Jie Liu
National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, Changsha 410073, China; Laboratory of Digitizing Software for Frontier Equipment, National University of Defense Technology, Changsha 410073, China; College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China