🤖 AI Summary
Implicit neural representations (INRs) struggle to accurately model high-frequency features of PDE solution fields, such as sharp discontinuities and localized oscillations, owing to spectral bias; the difficulty is compounded when a single shared network must represent multiple solutions, where limited capacity also hampers generalization.
Method: We propose Global Fourier Modulation (GFM), which explicitly injects frequency-domain priors at each INR layer, and introduce PDEfuncta—a meta-learning framework integrating Fourier-basis reparameterization, low-dimensional latent encoding, and spectrum-aware modeling.
Contribution/Results: PDEfuncta enables a single network to efficiently represent multiple PDE solution fields with zero-shot cross-task transferability. It unifies forward solving and inverse parameter estimation without fine-tuning. Evaluated on multiple PDE benchmarks, it significantly improves reconstruction fidelity of high-frequency structures while overcoming spectral bias and enhancing generalization across diverse solution manifolds.
📝 Abstract
Scientific machine learning often involves representing complex solution fields that exhibit high-frequency features such as sharp transitions, fine-scale oscillations, and localized structures. While implicit neural representations (INRs) have shown promise for continuous function modeling, capturing such high-frequency behavior remains a challenge, especially when modeling multiple solution fields with a shared network. Prior work addressing spectral bias in INRs has primarily focused on single-instance settings, limiting scalability and generalization. In this work, we propose Global Fourier Modulation (GFM), a novel modulation technique that injects high-frequency information at each layer of the INR through Fourier-based reparameterization. This enables compact and accurate representation of multiple solution fields using low-dimensional latent vectors. Building upon GFM, we introduce PDEfuncta, a meta-learning framework designed to learn multi-modal solution fields and support generalization to new tasks. Through empirical studies on diverse scientific problems, we demonstrate that our method not only improves representational quality but also shows potential for forward and inverse inference tasks without the need for retraining.
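To make the idea of layer-wise Fourier modulation concrete, here is a toy sketch of an INR whose hidden layers are scaled by sinusoidal features of a shared low-dimensional latent code. This is an illustrative assumption, not the paper's actual GFM formulation: the function names (`fourier_features`, `modulated_inr`), the fixed octave-spaced frequencies, and the element-wise scaling scheme are all hypothetical choices made for the sketch.

```python
import numpy as np

def fourier_features(z, freqs):
    # Map a latent vector z to sin/cos features at fixed frequencies
    # (hypothetical stand-in for a Fourier-based reparameterization).
    proj = np.outer(freqs, z).ravel()          # shape: (len(freqs) * len(z),)
    return np.concatenate([np.sin(proj), np.cos(proj)])

def modulated_inr(x, z, weights, freqs):
    # Toy INR forward pass: each hidden layer's pre-activation is scaled
    # (modulated) by Fourier features of the shared latent code z, so one
    # network can represent many solution fields, one latent per field.
    mod = fourier_features(z, freqs)
    h = x
    for W, b in weights[:-1]:
        scale = 1.0 + mod[: W.shape[0]]        # truncate to layer width (assumption)
        h = np.sin(scale * (W @ h + b))        # SIREN-style sine activation
    W, b = weights[-1]
    return W @ h + b

# Usage: evaluate a 1D field at a coordinate, conditioned on latent z.
rng = np.random.default_rng(0)
dims = [1, 32, 32, 1]
weights = [(rng.normal(size=(dims[i + 1], dims[i])) / np.sqrt(dims[i]),
            np.zeros(dims[i + 1])) for i in range(len(dims) - 1)]
z = rng.normal(size=4)                         # low-dimensional latent code
freqs = 2.0 ** np.arange(8)                    # octave-spaced frequencies
y = modulated_inr(np.array([0.3]), z, weights, freqs)
```

Because the modulation enters through sinusoids of the latent code, high-frequency content is injected at every layer rather than only at the input, which is the intuition behind counteracting spectral bias; changing `z` re-targets the same weights to a different solution field.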