🤖 AI Summary
This work addresses the limitations of existing parameter-efficient fine-tuning (PEFT) methods when adapting physics-informed neural networks under out-of-distribution conditions, where such approaches often disrupt the underlying physical manifold structure and introduce redundant parameters. Conventional SVD-based techniques further suffer from subspace locking and truncation of high-frequency spectral modes. To overcome these challenges, the authors propose MODE, a lightweight micro-architecture that decomposes physical evolution into three complementary mechanisms: dense mixing of dominant spectral components within a frozen orthogonal basis, a residual-spectrum reactivation mechanism, and affine Galilean decoupling. This design activates high-frequency modes while preserving the manifold structure, using only a minimal number of trainable parameters. Evaluated on tasks including 1D convection–diffusion–reaction equations and 2D Helmholtz equations, MODE substantially outperforms existing PEFT methods and demonstrates superior out-of-distribution generalization.
📝 Abstract
Physics-informed neural networks (PINNs) have achieved notable success in modeling dynamical systems governed by partial differential equations (PDEs). To avoid computationally expensive retraining under new physical conditions, parameterized PINNs (P$^2$INNs) commonly adapt pre-trained operators using singular value decomposition (SVD) for out-of-distribution (OOD) regimes. However, SVD-based fine-tuning often suffers from rigid subspace locking and truncation of important high-frequency spectral modes, limiting its ability to capture complex physical transitions. While parameter-efficient fine-tuning (PEFT) methods appear to be promising alternatives, applying conventional adapters such as LoRA to P$^2$INNs introduces a severe Pareto trade-off, as additive updates increase parameter overhead and disrupt the structured physical manifolds inherent in operator representations. To address these limitations, we propose Manifold-Orthogonal Dual-spectrum Extrapolation (MODE), a lightweight micro-architecture designed for physics operator adaptation. MODE decomposes physical evolution into complementary mechanisms including principal-spectrum dense mixing that enables cross-modal energy transfer within frozen orthogonal bases, residual-spectrum awakening that activates high-frequency spectral components through a single trainable scalar, and affine Galilean unlocking that explicitly isolates spatial translation dynamics. Experiments on challenging PDE benchmarks including the 1D Convection--Diffusion--Reaction equation and the 2D Helmholtz equation demonstrate that MODE achieves strong out-of-distribution generalization while preserving the minimal parameter complexity of native SVD and outperforming existing PEFT-based baselines.
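To make the abstract's three mechanisms concrete, here is a minimal NumPy sketch of the general idea: a frozen weight is factored by SVD, a small trainable mixer acts on the dominant modes, and a single trainable scalar gates the residual (high-frequency) spectrum. All variable names, shapes, and initializations are illustrative assumptions, not the authors' implementation, and the affine Galilean component is only noted in a comment.

```python
import numpy as np

# Hypothetical sketch of MODE-style adaptation (assumed names/shapes, not the authors' code).
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16))   # frozen pre-trained operator weight
U, S, Vt = np.linalg.svd(W)         # frozen orthogonal bases U, V^T and spectrum S
r = 4                               # number of dominant ("principal") spectral modes

# Principal-spectrum dense mixing: a small trainable r x r mixer replaces the
# diagonal of top singular values, allowing cross-modal energy transfer
# within the frozen orthogonal bases.
M = np.diag(S[:r]) + 0.01 * rng.standard_normal((r, r))  # trainable (r*r params)

# Residual-spectrum awakening: one trainable scalar gates the otherwise
# truncated high-frequency tail of the spectrum.
alpha = 0.1                                              # trainable (1 param)

W_adapted = (U[:, :r] @ M @ Vt[:r, :]
             + alpha * (U[:, r:] * S[r:]) @ Vt[r:, :])

# Affine Galilean unlocking would add a small trainable affine/translation term
# on the inputs; omitted here. Trainable budget: r*r + 1 vs. 16*16 for full
# fine-tuning of W.
print(W_adapted.shape)  # (16, 16)
```

With `M = np.diag(S[:r])` and `alpha = 1.0` the adapted weight recovers `W` exactly, so the adapter starts from (or near) the pre-trained operator rather than perturbing it additively as LoRA-style updates do.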