In-Context Operator Learning on the Space of Probability Measures

πŸ“… 2026-01-15
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the problem of learning optimal transport maps between probability distributions during inference without gradient updates and using only a few in-context samples. It introduces the first in-context operator learning framework that directly predicts optimal transport maps over measure spaces, applicable to both nonparametric and parametric distribution settings. By integrating in-context learning, optimal transport theory, and the low-dimensional manifold hypothesis, the method employs an explicit neural architecture and provides theoretical guarantees on generalization error and excess risk. Experiments on synthetic transport tasks and generative modeling benchmarks demonstrate the approach’s effectiveness, achieving high-accuracy map prediction without fine-tuning.

πŸ“ Abstract
We introduce \emph{in-context operator learning on probability measure spaces} for optimal transport (OT). The goal is to learn a single solution operator that maps a pair of distributions to the OT map, using only few-shot samples from each distribution as a prompt and \emph{without} gradient updates at inference. We parameterize the solution operator and develop scaling-law theory in two regimes. In the \emph{nonparametric} setting, when tasks concentrate on a low-intrinsic-dimension manifold of source--target pairs, we establish generalization bounds that quantify how in-context accuracy scales with prompt size, intrinsic task dimension, and model capacity. In the \emph{parametric} setting (e.g., Gaussian families), we give an explicit architecture that recovers the exact OT map in context and provide finite-sample excess-risk bounds. Our numerical experiments on synthetic transports and generative-modeling benchmarks validate the framework.
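In the parametric Gaussian setting mentioned in the abstract, the OT map under quadratic cost is available in closed form: T(x) = m₂ + A(x − m₁) with A = Σ₁^{−1/2}(Σ₁^{1/2} Σ₂ Σ₁^{1/2})^{1/2} Σ₁^{−1/2}. This is a standard result from optimal transport theory, not the paper's learned architecture; the sketch below (plain NumPy, illustrative only) shows the map an in-context learner would need to recover from few-shot Gaussian samples.

```python
import numpy as np

def gaussian_ot_map(m1, S1, m2, S2):
    """Closed-form Monge map T(x) = m2 + A (x - m1) between
    N(m1, S1) and N(m2, S2) under the quadratic cost,
    where A = S1^{-1/2} (S1^{1/2} S2 S1^{1/2})^{1/2} S1^{-1/2}."""
    def sqrtm(S):
        # Symmetric PSD square root via eigendecomposition.
        w, V = np.linalg.eigh(S)
        return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T
    S1h = sqrtm(S1)
    S1h_inv = np.linalg.inv(S1h)
    A = S1h_inv @ sqrtm(S1h @ S2 @ S1h) @ S1h_inv
    # A is symmetric, so x @ A.T == x @ A; rows of x are points.
    return lambda x: m2 + (x - m1) @ A.T

# Sanity check: pushing source samples through T should
# reproduce the target mean and covariance.
rng = np.random.default_rng(0)
m1, m2 = np.zeros(2), np.array([1.0, -1.0])
S1 = np.array([[2.0, 0.5], [0.5, 1.0]])
S2 = np.array([[1.0, -0.3], [-0.3, 0.5]])
T = gaussian_ot_map(m1, S1, m2, S2)
X = rng.multivariate_normal(m1, S1, size=200_000)
Y = T(X)
print(np.allclose(Y.mean(axis=0), m2, atol=0.02))   # True
print(np.allclose(np.cov(Y.T), S2, atol=0.02))      # True
```

An in-context model in this regime is judged by how closely its predicted map matches this exact transport given only prompt samples from the two Gaussians.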
Problem

Research questions and friction points this paper addresses.

in-context learning
operator learning
optimal transport
probability measures
few-shot learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

in-context operator learning
optimal transport
probability measure spaces
few-shot learning
generalization bounds
πŸ”Ž Similar Papers
F
Frank Cole
School of Mathematics, University of Minnesota, Minneapolis, MN 55455, USA
Dixi Wang
Department of Mathematics, Purdue University, West Lafayette, IN 47907, USA
Yineng Chen
Department of Mathematics, Purdue University, West Lafayette, IN 47907, USA
Yulong Lu
Assistant Professor at University of Minnesota Twin Cities
Applied and Computational Mathematics · Probability · Statistics
Rongjie Lai
Professor of Mathematics, Purdue University
Applied and Computational Mathematics