Graph In-Context Operator Networks for Generalizable Spatiotemporal Prediction

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited generalization of single-operator learning in spatiotemporal systems and its inability to exploit contextual examples for few-shot inference. The authors propose GICON (Graph In-Context Operator Network), which achieves geometric generalization through graph message passing and cardinality generalization through example-aware positional encoding, inferring solution operators from contextual examples without updating model weights. As the first study to systematically compare in-context operator learning with classical single-operator learning under identical training conditions, GICON provides a unified framework that integrates graph structure with contextual examples. Evaluated on air quality forecasting in two regions of China, GICON outperforms classical operator learning on complex tasks, generalizing across spatial domains and scaling robustly from a handful of contextual examples to over one hundred at inference.

📝 Abstract
In-context operator learning enables neural networks to infer solution operators from contextual examples without weight updates. While prior work has demonstrated the effectiveness of this paradigm in leveraging vast datasets, a systematic comparison against single-operator learning using identical training data has been absent. We address this gap through controlled experiments comparing in-context operator learning against classical operator learning (single-operator models trained without contextual examples), under the same training steps and dataset. To enable this investigation on real-world spatiotemporal systems, we propose GICON (Graph In-Context Operator Network), combining graph message passing for geometric generalization with example-aware positional encoding for cardinality generalization. Experiments on air quality prediction across two Chinese regions show that in-context operator learning outperforms classical operator learning on complex tasks, generalizing across spatial domains and scaling robustly from few training examples to 100 at inference.
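The two ingredients named in the abstract can be illustrated with a minimal NumPy sketch: mean-aggregation message passing over a station graph, a sinusoidal encoding indexed by example position, and attention of a query over the encoded contextual examples so that the operator is inferred at inference time, with no weight updates. The specific encoding form, aggregation rule, and attention readout here are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def message_passing(h, adj):
    """One round of mean-aggregation message passing on a spatial graph.

    h:   (num_nodes, d) node features (e.g., per-station pollutant readings)
    adj: (num_nodes, num_nodes) binary adjacency of monitoring stations
    """
    deg = adj.sum(axis=1, keepdims=True)       # node degrees
    msgs = adj @ h / np.maximum(deg, 1)        # mean over neighbours
    return np.tanh(h + msgs)                   # residual update

def example_positional_encoding(num_examples, d):
    """Sinusoidal encoding indexed by example position, letting the model
    attend over a variable number of contextual examples (cardinality
    generalization). Hypothetical form; GICON's exact encoding may differ.
    """
    pos = np.arange(num_examples)[:, None]
    freq = 1.0 / (10000 ** (np.arange(0, d, 2) / d))
    pe = np.zeros((num_examples, d))
    pe[:, 0::2] = np.sin(pos * freq)
    pe[:, 1::2] = np.cos(pos * freq)
    return pe

num_nodes, d, num_examples = 5, 8, 3
adj = (rng.random((num_nodes, num_nodes)) > 0.5).astype(float)
np.fill_diagonal(adj, 0)

# Contextual examples: (input field, output field) pairs on the same graph.
examples = [(rng.standard_normal((num_nodes, d)),
             rng.standard_normal((num_nodes, d)))
            for _ in range(num_examples)]
pe = example_positional_encoding(num_examples, d)

# Tag each example with its positional encoding, then pass messages.
context = np.stack([message_passing(x + pe[i], adj)
                    + message_passing(y + pe[i], adj)
                    for i, (x, y) in enumerate(examples)])
query = rng.standard_normal((num_nodes, d))
query_h = message_passing(query, adj)

# Query attends over contextual examples -- no weight updates involved.
scores = np.einsum('nd,end->e', query_h, context) / np.sqrt(d)
weights = np.exp(scores - scores.max())
weights /= weights.sum()
prediction = np.einsum('e,end->nd', weights, context)
print(prediction.shape)  # → (5, 8)
```

Because the attention is computed over however many examples are supplied, the same frozen model can be queried with three context pairs or a hundred, which is the cardinality generalization the abstract describes.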
Problem

Research questions and friction points this paper is trying to address.

in-context operator learning
classical operator learning
spatiotemporal prediction
generalization
controlled comparison
Innovation

Methods, ideas, or system contributions that make the work stand out.

in-context operator learning
graph message passing
example-aware positional encoding
spatiotemporal prediction
geometric generalization
Chenghan Wu
National University of Singapore
Zongmin Yu
National University of Singapore
Boai Sun
National University of Singapore
Liu Yang
Department of Mathematics, National University of Singapore
Deep Learning
Partial Differential Equation
Generative Model
In-Context Learning