Test-time Generalization for Physics through Neural Operator Splitting

📅 2026-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing neural operators exhibit limited generalization when test inputs lie outside the training distribution—such as under novel initial conditions, unknown PDE coefficients, or unseen physical laws—hindering zero-shot transfer. This work proposes a test-time generalization approach that dynamically splits and recombines multiple pre-trained neural operators from a dictionary to approximate unseen physical dynamics without any fine-tuning. Built upon the DISCO framework, the method integrates test-time operator composition search with PDE parameter inversion, significantly enhancing zero-shot generalization on tasks involving parameter extrapolation and novel combinations of physical phenomena. It accurately recovers underlying PDE parameters and, for the first time, achieves zero-shot physical generalization without updating model weights.
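The "splitting" idea named above echoes classical operator splitting for PDEs, where one timestep of a composite dynamic is approximated by applying each constituent sub-operator in sequence. As a minimal illustration of that scheme (not the paper's neural method: the sub-operators here are hand-written finite-difference updates standing in for pretrained neural operators, and all step sizes are arbitrary choices):

```python
import numpy as np

def advection_step(u, dt, c=1.0, dx=1.0):
    # Upwind update for u_t + c u_x = 0 on a periodic grid
    # (illustrative stand-in for one dictionary operator).
    return u - c * dt / dx * (u - np.roll(u, 1))

def diffusion_step(u, dt, nu=0.1, dx=1.0):
    # Explicit update for u_t = nu u_xx on a periodic grid
    # (illustrative stand-in for another dictionary operator).
    return u + nu * dt / dx**2 * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

def lie_split_step(u, dt, operators):
    # Lie (sequential) operator splitting: advance the state by applying
    # each sub-operator for the full timestep, one after the other.
    for op in operators:
        u = op(u, dt)
    return u

# One split step for advection-diffusion from a Gaussian initial condition.
x = np.linspace(0, 63, 64)
u0 = np.exp(-((x - 32) ** 2) / 20.0)
u1 = lie_split_step(u0, dt=0.1, operators=[advection_step, diffusion_step])
```

With periodic boundaries both stencils conserve the total mass `u.sum()`, which gives a quick sanity check on the composed step.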

📝 Abstract
Neural operators have shown promise in learning solution maps of partial differential equations (PDEs), but they often struggle to generalize when test inputs lie outside the training distribution, such as novel initial conditions, unseen PDE coefficients, or unseen physics. Prior works address this limitation with large-scale multi-physics pretraining followed by fine-tuning, but this still requires examples from the new dynamics, falling short of true zero-shot generalization. In this work, we propose a method to enhance generalization at test time, i.e., without modifying pretrained weights. Building on DISCO, which provides a dictionary of neural operators trained across different dynamics, we introduce a neural operator splitting strategy that, at test time, searches over compositions of training operators to approximate unseen dynamics. On challenging out-of-distribution tasks, including parameter extrapolation and novel combinations of physical phenomena, our approach achieves state-of-the-art zero-shot generalization while recovering the underlying PDE parameters. These results underscore test-time computation as a key avenue for building flexible, compositional, and generalizable neural operators.
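One way to picture the test-time search the abstract describes is a toy version: freeze a dictionary of pretrained operators and fit only the coefficients that combine their one-step predictions to a short observed trajectory. The sketch below is an assumption-laden simplification, not DISCO itself: the "operators" are fixed random linear maps, and the composition is a plain weighted sum solved by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for frozen pretrained neural operators: each maps a state u_t
# to a one-step prediction u_{t+1}. Here they are fixed linear maps.
dictionary = [rng.normal(size=(16, 16)) / 4 for _ in range(3)]
operators = [lambda u, A=A: A @ u for A in dictionary]

def fit_mixture_weights(operators, states, next_states):
    # Test-time search: find coefficients w minimizing
    # || sum_k w_k O_k(u_t) - u_{t+1} ||^2 over the observed pairs,
    # while every operator's own weights stay untouched.
    preds = np.stack([np.stack([op(u) for u in states]) for op in operators])
    K = preds.shape[0]
    Phi = preds.reshape(K, -1).T           # (num_pairs * dim, K) design matrix
    y = np.stack(next_states).reshape(-1)  # flattened targets
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

# "Unseen" dynamics: a hidden combination of the dictionary operators.
w_true = np.array([0.5, 0.3, 0.2])
states = [rng.normal(size=16) for _ in range(8)]
next_states = [sum(w * op(u) for w, op in zip(w_true, operators))
               for u in states]

w_hat = fit_mixture_weights(operators, states, next_states)
```

Because the target dynamic lies exactly in the span of the dictionary here, the fitted weights recover the hidden combination, which mirrors the paper's claim that the search can also read off underlying PDE parameters.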
Problem

Research questions and friction points this paper is trying to address.

test-time generalization
neural operators
zero-shot generalization
out-of-distribution
physics-informed learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

neural operator splitting
test-time generalization
zero-shot learning
physics-informed neural networks
out-of-distribution generalization