Thompson Sampling in Function Spaces via Neural Operators

📅 2025-06-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses efficient optimization in function spaces: minimizing a known functional of the output of an unknown operator (e.g., a PDE solver), where functional evaluations are inexpensive but operator queries (e.g., high-fidelity simulations) are prohibitively costly. To tackle this, we introduce the first Thompson sampling framework for infinite-dimensional function spaces, termed "sample-then-optimize": it treats pretrained neural operators as approximate samples from an infinite-dimensional Gaussian process, circumventing explicit uncertainty quantification. We further establish the first convergence theory for Thompson sampling in function spaces. Experiments on PDE-constrained and nonlinear-operator-driven functional optimization demonstrate substantial improvements in sample efficiency, consistently outperforming state-of-the-art baselines.
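To make the "sample-then-optimize" loop concrete, here is a minimal PyTorch sketch of the three steps it implies: retrain a freshly initialized surrogate on all data gathered so far (standing in for a posterior draw), minimize the cheap functional through that frozen surrogate, then spend one costly operator query at the proposed input. Everything here is illustrative, not the paper's setup: `functional_J`, `true_operator`, and the tiny MLP surrogate are toy stand-ins chosen only to show the control flow.

```python
import torch

torch.manual_seed(0)
N = 64  # grid points discretizing the input function

# Hypothetical stand-ins (not from the paper): a cheap, known functional J
# and a costly ground-truth operator (think: a high-fidelity PDE solver).
def functional_J(u):
    return (u ** 2).mean(dim=-1)

def true_operator(a):
    return torch.sin(3.0 * a) + 0.1 * a  # toy nonlinear operator

def make_surrogate():
    # A tiny MLP stands in for a neural operator; its random initialization
    # plays the role of a fresh draw from the (approximate) prior.
    return torch.nn.Sequential(
        torch.nn.Linear(N, 128), torch.nn.Tanh(), torch.nn.Linear(128, N))

def fit(model, xs, ys, steps=500):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        ((model(xs) - ys) ** 2).mean().backward()
        opt.step()
    return model

# Seed data: a handful of costly operator queries.
xs = torch.randn(4, N)
ys = true_operator(xs)

for t in range(10):
    # 1. "Posterior sample": retrain a freshly initialized surrogate on all data.
    surrogate = fit(make_surrogate(), xs, ys)
    for p in surrogate.parameters():
        p.requires_grad_(False)

    # 2. Cheap inner loop: minimize J through the frozen surrogate by gradient descent.
    x = torch.randn(1, N, requires_grad=True)
    inner = torch.optim.Adam([x], lr=1e-2)
    for _ in range(200):
        inner.zero_grad()
        functional_J(surrogate(x)).sum().backward()
        inner.step()

    # 3. Spend one costly operator query at the proposed input; grow the data set.
    x_new = x.detach()
    y_new = true_operator(x_new)
    xs, ys = torch.cat([xs, x_new]), torch.cat([ys, y_new])
    print(f"round {t}: J at query = {functional_J(y_new).item():.4f}")
```

The key point of the loop is that randomness enters only through the surrogate's re-initialization at each round, which is what lets the method explore without ever computing an explicit posterior variance.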

📝 Abstract
We propose an extension of Thompson sampling to optimization problems over function spaces where the objective is a known functional of an unknown operator's output. We assume that functional evaluations are inexpensive, while queries to the operator (such as running a high-fidelity simulator) are costly. Our algorithm employs a sample-then-optimize approach using neural operator surrogates. This strategy avoids explicit uncertainty quantification by treating trained neural operators as approximate samples from a Gaussian process. We provide novel theoretical convergence guarantees, based on Gaussian processes in the infinite-dimensional setting, under minimal assumptions. We benchmark our method against existing baselines on functional optimization tasks involving partial differential equations and other nonlinear operator-driven phenomena, demonstrating improved sample efficiency and competitive performance.
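The abstract's phrase "treating trained neural operators as approximate samples from a Gaussian process" is often realized via randomized, anchored training: fit a randomly initialized network while regularizing its weights back toward that initialization, which in wide-network or linearized regimes behaves like a draw from a GP posterior. The sketch below illustrates that generic recipe; the function name, regularization weight, and toy data are assumptions for illustration, not the paper's exact procedure.

```python
import torch

def fit_posterior_sample(model, xs, ys, reg_weight=1e-2, steps=500):
    """Randomized, anchored fit (illustrative, not the paper's exact recipe):
    pull the weights back toward their random initialization while fitting
    the data, so the trained network acts like an approximate GP posterior
    draw, with no explicit uncertainty quantification required."""
    theta0 = [p.detach().clone() for p in model.parameters()]
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        data_loss = ((model(xs) - ys) ** 2).mean()
        anchor = sum(((p - p0) ** 2).sum()
                     for p, p0 in zip(model.parameters(), theta0))
        (data_loss + reg_weight * anchor).backward()
        opt.step()
    return model

# Usage: two independently initialized fits yield two approximate posterior draws.
xs, ys = torch.randn(16, 8), torch.randn(16, 8)
draws = [fit_posterior_sample(
            torch.nn.Sequential(torch.nn.Linear(8, 64), torch.nn.Tanh(),
                                torch.nn.Linear(64, 8)), xs, ys)
         for _ in range(2)]
```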
Problem

Research questions and friction points this paper is trying to address.

Extends Thompson sampling to optimization problems over function spaces
Reduces the number of costly operator queries by optimizing against neural surrogates
Provides convergence guarantees for Thompson sampling with infinite-dimensional Gaussian processes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends Thompson sampling to function spaces via a sample-then-optimize strategy
Uses trained neural operator surrogates as the objects optimized over
Avoids explicit uncertainty quantification by treating trained neural operators as approximate Gaussian process samples
🔎 Similar Papers
2024-09-20 · World Scientific Annual Review of Artificial Intelligence · Citations: 1
👥 Authors
Rafael Oliveira (CSIRO's Data61, Sydney, Australia)
Xuesong Wang (CSIRO's Data61, Sydney, Australia)
Kian Ming A. Chai (DSO National Laboratories, Singapore)
Edwin V. Bonilla (Principal Research Scientist, CSIRO's Data61)
Machine Learning · Bayesian Statistics · Probabilistic Inference · Artificial Intelligence · Gaussian Processes