FreqMoE: Dynamic Frequency Enhancement for Neural PDE Solvers

📅 2025-05-11
🤖 AI Summary
Fourier Neural Operators (FNOs) suffer from high-frequency information loss, modeling distortion, and inefficiency when solving high-dimensional, high-resolution, or long-horizon PDEs, because they truncate the frequency domain with a fixed pattern. Method: We propose a dynamic frequency-enhanced sparse Mixture-of-Experts (MoE) framework. Its core innovation is the first "low-frequency pretraining, high-frequency progressive fine-tuning" paradigm, coupled with a frequency-domain sparse upward-cycling architecture that enables adaptive sparse activation and weight expansion over Fourier coefficients. The method supports both regular and irregular grids. Contribution/Results: Experiments demonstrate up to a 16.6% improvement in PDE solution accuracy while using only 2.1% of the parameters of the original dense FNO (a 47.32× reduction). The framework further improves long-horizon prediction stability and generalizes across FNO variants.
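
To make the summary concrete, here is a minimal, hypothetical sketch of a frequency-domain sparse-MoE spectral layer in PyTorch. The module name, the contiguous band partition, and the energy-based top-k gate are illustrative assumptions rather than the paper's exact design: a dense low-frequency path stays always active, while higher Fourier bands are handled by sparsely activated experts.

```python
import torch
import torch.nn as nn


class FreqMoESpectralConv1d(nn.Module):
    """Hypothetical sketch: a dense, always-active low-frequency path plus
    sparsely gated experts over higher Fourier-coefficient bands.
    Requires low_modes + n_experts * band_modes <= n_points // 2 + 1."""

    def __init__(self, channels, low_modes, n_experts, band_modes, top_k=1):
        super().__init__()
        scale = 1.0 / channels**2
        self.low_modes, self.band_modes, self.top_k = low_modes, band_modes, top_k
        # Dense weights for the low-frequency band (learned in pretraining).
        self.low_weight = nn.Parameter(
            scale * torch.randn(channels, channels, low_modes, dtype=torch.cfloat))
        # One complex weight tensor per high-frequency expert.
        self.experts = nn.Parameter(
            scale * torch.randn(n_experts, channels, channels, band_modes,
                                dtype=torch.cfloat))
        # Gate maps per-band spectral energies to expert scores.
        self.gate = nn.Linear(n_experts, n_experts)

    def forward(self, x):                      # x: (batch, channels, n_points)
        B, C, N = x.shape
        x_ft = torch.fft.rfft(x)               # (B, C, N // 2 + 1), complex
        out_ft = torch.zeros_like(x_ft)
        # Always-on low-frequency path.
        out_ft[..., :self.low_modes] = torch.einsum(
            "bix,iox->box", x_ft[..., :self.low_modes], self.low_weight)
        # Score each high-frequency band by its spectral energy.
        n_exp = self.experts.size(0)
        energies = []
        for e in range(n_exp):
            s = self.low_modes + e * self.band_modes
            energies.append(
                x_ft[..., s:s + self.band_modes].abs().pow(2).mean(dim=(1, 2)))
        scores = self.gate(torch.stack(energies, dim=-1))       # (B, n_exp)
        top = scores.topk(self.top_k, dim=-1).indices
        mask = torch.zeros_like(scores).scatter_(1, top, 1.0)   # sparse activation
        # Apply only the selected experts; inactive bands stay zero.
        for e in range(n_exp):
            s = self.low_modes + e * self.band_modes
            band = torch.einsum("bix,iox->box",
                                x_ft[..., s:s + self.band_modes], self.experts[e])
            out_ft[..., s:s + self.band_modes] = band * mask[:, e, None, None]
        return torch.fft.irfft(out_ft, n=N)
```

For instance, `FreqMoESpectralConv1d(channels=32, low_modes=16, n_experts=4, band_modes=8)` covers the first 48 rfft coefficients, so inputs need at least 94 grid points; inactive expert bands cost no compute in the Fourier-side matmuls beyond the gating pass.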

📝 Abstract
Fourier Neural Operators (FNOs) have emerged as promising solutions for efficiently solving partial differential equations (PDEs) by learning infinite-dimensional function mappings through frequency-domain transformations. However, the sparsity of high-frequency signals limits computational efficiency for high-dimensional inputs, and fixed-pattern truncation often causes high-frequency signal loss, reducing performance in scenarios such as high-resolution inputs or long-term predictions. To address these challenges, we propose FreqMoE, an efficient and progressive training framework that exploits the dependency of high-frequency signals on low-frequency components. The model first learns low-frequency weights and then applies a sparse upward-cycling strategy to construct a mixture of experts (MoE) in the frequency domain, effectively extending the learned weights to high-frequency regions. Experiments on both regular- and irregular-grid PDEs demonstrate that FreqMoE achieves up to a 16.6% accuracy improvement while using merely 2.1% of the parameters (a 47.32× reduction) compared to dense FNO. Furthermore, the approach demonstrates remarkable stability in long-term predictions and generalizes seamlessly to various FNO variants and grid structures, establishing a new "Low-frequency Pretraining, High-frequency Fine-tuning" paradigm for solving PDEs.
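
For contrast with the fixed-pattern truncation the abstract criticizes, below is a simplified sketch of a standard FNO-style 1-D spectral convolution, following the publicly known FNO design with illustrative names: every Fourier coefficient beyond a fixed `modes` cutoff is zeroed, regardless of the input.

```python
import torch
import torch.nn as nn


class SpectralConv1d(nn.Module):
    """Simplified FNO-style spectral convolution with fixed truncation."""

    def __init__(self, in_ch, out_ch, modes):
        super().__init__()
        self.modes = modes  # number of low-frequency modes kept, fixed a priori
        scale = 1.0 / (in_ch * out_ch)
        self.weight = nn.Parameter(
            scale * torch.randn(in_ch, out_ch, modes, dtype=torch.cfloat))

    def forward(self, x):                      # x: (batch, in_ch, n_points)
        x_ft = torch.fft.rfft(x)
        out_ft = torch.zeros(x.size(0), self.weight.size(1), x_ft.size(-1),
                             dtype=torch.cfloat, device=x.device)
        # Only the first `modes` coefficients pass through; everything above
        # the cutoff is zeroed, which is the fixed-pattern high-frequency loss.
        out_ft[..., :self.modes] = torch.einsum(
            "bix,iox->box", x_ft[..., :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.size(-1))
```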
Problem

Research questions and friction points this paper is trying to address.

Addresses the sparsity of high-frequency signals in Fourier Neural Operators
Reduces high-frequency signal loss caused by fixed-pattern truncation in PDE solving (a small numerical illustration follows this list)
Improves accuracy and efficiency for high-resolution inputs and long-term predictions
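
As a back-of-the-envelope illustration (not a result from the paper), the following snippet measures how much spectral energy a fixed 16-mode truncation throws away for a signal with sharp features; for a square wave, roughly a fifth of the energy sits in the discarded harmonics.

```python
import torch

# Illustrative only: spectral energy discarded by a fixed 16-mode cutoff
# for a square wave, whose sharp edges put substantial energy in high
# harmonics (roughly 19% here; the exact value is signal-dependent).
n, kept_modes = 4096, 16
t = torch.linspace(0.0, 1.0, n)
signal = torch.sign(torch.sin(2 * torch.pi * 8 * t))  # square wave, 8 cycles
energy = torch.fft.rfft(signal).abs().pow(2)
discarded = energy[kept_modes:].sum() / energy.sum()
print(f"energy discarded by keeping {kept_modes} modes: {discarded.item():.1%}")
```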
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive training framework for frequency enhancement
Sparse upward-cycling strategy in the frequency domain (see the initialization sketch after this list)
"Low-frequency Pretraining, High-frequency Fine-tuning" paradigm
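
Below is a hedged sketch of how such upward-cycling could initialize the MoE layer from a pretrained dense low-frequency model. The function name and the choice to seed every expert from the tail of the pretrained weights are assumptions for illustration; `FreqMoESpectralConv1d` refers to the hypothetical module sketched earlier, and the paper's actual expansion scheme may differ.

```python
import torch


def upcycle_from_pretrained(pretrained_low_weight, moe_layer):
    """Hypothetical initialization: expand pretrained dense low-frequency
    weights into the experts of a FreqMoESpectralConv1d-style layer.
    Assumes band_modes <= low_modes so the tail slice is well defined."""
    with torch.no_grad():
        # Reuse the pretrained dense weights for the always-on low band.
        moe_layer.low_weight.copy_(pretrained_low_weight)
        # Assumption for illustration: seed every expert from the highest
        # pretrained modes, exploiting the dependence of high-frequency
        # content on the learned low-frequency structure.
        tail = pretrained_low_weight[..., -moe_layer.band_modes:]
        moe_layer.experts.copy_(tail.unsqueeze(0).expand_as(moe_layer.experts))
    # Freeze the pretrained path; fine-tune only the experts and the gate.
    moe_layer.low_weight.requires_grad_(False)
```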
👥 Authors
Tianyu Chen
SKLCCSE, School of Computer Science and Engineering, Beihang University, China
Haoyi Zhou
Associate Professor, Beihang University
Machine Learning · Data Mining · Time-series
Ying Li
SKLMIP, School of Computer Science, Peking University, China
Hao Wang
SKLMIP, School of Computer Science, Peking University, China
Zhenzhe Zhang
SKLMIP, School of Computer Science, Peking University, China
Tianchen Zhu
Beihang University
Shanghang Zhang
Peking University
Embodied AI · Foundation Models
Jianxin Li
SKLCCSE, School of Computer Science and Engineering, Beihang University, China