Data-Efficient Operator Learning via Unsupervised Pretraining and In-Context Learning

📅 2024-02-24
🏛️ Neural Information Processing Systems
📈 Citations: 12
Influential: 0
🤖 AI Summary
Neural operators suffer from low data efficiency due to heavy reliance on costly PDE numerical solutions for supervised training. To address this, we propose a physics-driven unsupervised pretraining framework: first, we design a reconstruction-based proxy task grounded in PDE physical constraints, enabling pretraining on unlabeled parametric PDE data; second, we introduce a training-free, fine-tuning-free similarity-guided in-context learning mechanism that dynamically infers operator weights via physical similarity between query and support samples. This work is the first to jointly integrate reconstruction-based unsupervised pretraining and similarity-aware in-context learning for operator learning. We validate our approach on Fourier Neural Operator and DeepONet architectures: using only 1%–5% labeled data, it matches or surpasses fully supervised baselines, while demonstrating significantly stronger cross-equation generalization than vision-inspired transfer learning methods.
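The similarity-guided in-context mechanism described above can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's implementation: it assumes similarity is measured by L2 distance between the query and support inputs, and that the prediction is a softmax-weighted combination of the support solutions, requiring no training or fine-tuning. The function name `in_context_predict` and the temperature `tau` are illustrative choices.

```python
import numpy as np

def in_context_predict(query, support_inputs, support_outputs, tau=1.0):
    """Similarity-weighted prediction from in-context examples.

    Hypothetical sketch: each support solution is weighted by the softmax
    of the negative L2 distance between the query input and that support
    input. No gradient updates are involved, so the mechanism is
    training-free and fine-tuning-free.
    """
    dists = np.array([np.linalg.norm(query - x) for x in support_inputs])
    weights = np.exp(-dists / tau)
    weights /= weights.sum()  # normalize to a probability distribution
    return sum(w * y for w, y in zip(weights, support_outputs))
```

In this sketch a query close to a particular support input inherits most of that support's solution, which is the intuition behind leveraging physically similar in-context examples at inference time.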

📝 Abstract
Recent years have witnessed the promise of coupling machine learning methods and physical domain-specific insights for solving scientific problems based on partial differential equations (PDEs). However, being data-intensive, these methods still require a large amount of PDE data. This reintroduces the need for expensive numerical PDE solutions, partially undermining the original goal of avoiding these expensive simulations. In this work, seeking data efficiency, we design unsupervised pretraining for PDE operator learning. To reduce the need for training data with heavy simulation costs, we mine unlabeled PDE data without simulated solutions, and we pretrain neural operators with physics-inspired reconstruction-based proxy tasks. To improve out-of-distribution performance, we further assist neural operators in flexibly leveraging a similarity-based method that learns in-context examples, without incurring extra training costs or designs. Extensive empirical evaluations on a diverse set of PDEs demonstrate that our method is highly data-efficient, more generalizable, and even outperforms conventional vision-pretrained models. We provide our code at https://github.com/delta-lab-ai/data_efficient_nopt.
Problem

Research questions and friction points this paper is trying to address.

Reducing PDE data dependency via unsupervised pretraining
Enhancing out-of-distribution performance with in-context learning
Avoiding expensive numerical PDE simulations for operator learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised pretraining for PDE operator learning
Physics-inspired reconstruction-based proxy tasks
Similarity-based in-context learning method
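The reconstruction-based proxy task in the list above can be sketched as masked reconstruction on unlabeled PDE fields. This is a minimal illustration under assumed details (random point masking, MSE on masked entries); the paper's actual proxy tasks are physics-inspired and may differ. The helpers `masked_reconstruction_batch` and `reconstruction_loss` are hypothetical names.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_reconstruction_batch(field, mask_ratio=0.3, rng=rng):
    """Build one (corrupted input, target, mask) triple for pretraining.

    Hypothetical sketch: zero out a random subset of grid points in an
    unlabeled PDE field (e.g. a coefficient or initial condition). The
    pretraining target is the original field itself, so no simulated
    PDE solution is required.
    """
    mask = rng.random(field.shape) < mask_ratio
    corrupted = np.where(mask, 0.0, field)
    return corrupted, field, mask

def reconstruction_loss(pred, target, mask):
    # MSE evaluated only on the masked entries, a common choice in
    # masked-autoencoding-style pretraining.
    return float(np.mean((pred[mask] - target[mask]) ** 2))
```

A neural operator pretrained to minimize this loss on cheap unlabeled inputs can then be fine-tuned on the small labeled set (1%–5% in the paper's experiments).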
Wuyang Chen
Assistant Professor, CS@Simon Fraser University
Scientific Machine Learning · Computer Vision · Large Language Models · Reasoning
Jialin Song
Department of Statistics, University of California, Berkeley; International Computer Science Institute; Lawrence Berkeley National Laboratory
Pu Ren
Lawrence Berkeley National Lab, Northeastern University
Machine Learning · AI for Science
Shashank Subramanian
Lawrence Berkeley National Laboratory
scientific machine learning · large-scale optimization · inverse problems · high-performance computing
Dmitriy Morozov
Lawrence Berkeley National Laboratory
Michael W. Mahoney
Department of Statistics, University of California, Berkeley; International Computer Science Institute; Lawrence Berkeley National Laboratory