AI Summary
This work proposes DynFormer, a novel neural operator that integrates dynamical priors to address the high computational cost of traditional numerical methods for solving high-dimensional multiscale partial differential equations (PDEs). Existing Transformer-based approaches often neglect the inherent scale separation in physical fields, leading to redundant and inefficient global attention mechanisms. DynFormer is the first to incorporate the principle of physical scale separation into the Transformer architecture by employing spectral embedding, Kronecker-structured attention, and a hybrid local-global transformation, thereby assigning dedicated modules to distinct scales. Evaluated on four PDE benchmarks, the method achieves up to a 95% reduction in relative error, substantially lowers GPU memory consumption, and demonstrates high accuracy and stability in long-term evolution tasks.
Abstract
Partial differential equations (PDEs) are fundamental for modeling complex physical systems, yet classical numerical solvers face prohibitive computational costs in high-dimensional and multi-scale regimes. While Transformer-based neural operators have emerged as powerful data-driven alternatives, they conventionally treat all discretized spatial points as uniform, independent tokens. This monolithic approach ignores the intrinsic scale separation of physical fields, applying computationally prohibitive global attention that redundantly mixes smooth large-scale dynamics with high-frequency fluctuations. Rethinking Transformers through the lens of complex dynamics, we propose DynFormer, a novel dynamics-informed neural operator. Rather than applying a uniform attention mechanism across all scales, DynFormer explicitly assigns specialized network modules to distinct physical scales. It leverages a Spectral Embedding to isolate low-frequency modes, enabling a Kronecker-structured attention mechanism to efficiently capture large-scale global interactions with reduced complexity. Concurrently, we introduce a Local-Global-Mixing transformation. This module utilizes nonlinear multiplicative frequency mixing to implicitly reconstruct the small-scale, fast-varying turbulent cascades that are slaved to the macroscopic state, without incurring the cost of global attention. Integrating these modules into a hybrid evolutionary architecture ensures robust long-term temporal stability. Extensive memory-aligned evaluations across four PDE benchmarks demonstrate that DynFormer achieves up to a 95% reduction in relative error compared to state-of-the-art baselines, while significantly reducing GPU memory consumption. Our results establish that embedding first-principles physical dynamics into Transformer architectures yields a highly scalable, theoretically grounded blueprint for PDE surrogate modeling.
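The three architectural ideas named in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, the random projections standing in for learned weights, and the specific mixing rule are assumptions made for illustration. Only the structural ideas come from the text: Fourier truncation to isolate low-frequency modes (spectral embedding), attention on a 2-D grid factored into a row matrix and a column matrix whose joint action is a Kronecker product (Kronecker-structured attention), and a pointwise product that generates new sum/difference frequencies without global attention (Local-Global-Mixing).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spectral_embed(u, k_max):
    """Isolate the large-scale part of a field u of shape (..., N):
    keep only the first k_max Fourier modes and zero the rest."""
    U = np.fft.rfft(u, axis=-1)
    U[..., k_max:] = 0.0
    return np.fft.irfft(U, n=u.shape[-1], axis=-1)

def kronecker_attention(X, d_k=16):
    """Attention over an H x W grid factored as A_row (H x H) and
    A_col (W x W); applying both is equivalent to one (HW x HW)
    attention matrix with Kronecker structure A_row ⊗ A_col, so the
    score matrices cost O(H^2 + W^2) instead of O(H^2 W^2).
    The random Wq/Wk projections are stand-ins for learned weights."""
    H, W, d = X.shape
    Wq = rng.normal(size=(d, d_k))
    Wk = rng.normal(size=(d, d_k))
    rows = X.mean(axis=1)                      # (H, d) pooled row features
    cols = X.mean(axis=0)                      # (W, d) pooled column features
    A_r = softmax((rows @ Wq) @ (rows @ Wk).T / np.sqrt(d_k))  # (H, H)
    A_c = softmax((cols @ Wq) @ (cols @ Wk).T / np.sqrt(d_k))  # (W, W)
    # (A_r ⊗ A_c) applied to the flattened grid, written as an einsum.
    return np.einsum("ij,kl,jld->ikd", A_r, A_c, X)

def local_global_mixing(u, k_max):
    """Multiplicative frequency mixing (hedged stand-in for the paper's
    module): multiplying the large-scale field pointwise with the full
    field creates sum/difference frequencies, i.e. small-scale content
    slaved to the macroscopic state, with no global attention."""
    u_large = spectral_embed(u, k_max)
    return u_large + u_large * u
```

As a sanity check, truncating a two-mode signal with `spectral_embed(u, 3)` removes a mode-5 oscillation while leaving a mode-1 oscillation intact, and `kronecker_attention` maps an `(H, W, d)` field to an output of the same shape while never materializing an `(HW, HW)` attention matrix.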