🤖 AI Summary
This work addresses efficient optimization in function spaces: minimizing a known functional of the output of an unknown operator (e.g., a PDE solver), where functional evaluations are inexpensive but operator queries (e.g., high-fidelity simulations) are prohibitively costly. To tackle this, the authors introduce the first Thompson sampling framework for infinite-dimensional function spaces, termed "sample-then-optimize": it treats trained neural operators as approximate samples from an infinite-dimensional Gaussian process, circumventing explicit uncertainty quantification. The work further establishes the first convergence theory for Thompson sampling in function spaces. Experiments on PDE-constrained and nonlinear-operator-driven functional optimization demonstrate substantial improvements in sample efficiency, consistently outperforming state-of-the-art baselines.
📝 Abstract
We propose an extension of Thompson sampling to optimization problems over function spaces where the objective is a known functional of an unknown operator's output. We assume that functional evaluations are inexpensive, while queries to the operator (such as running a high-fidelity simulator) are costly. Our algorithm employs a sample-then-optimize approach using neural operator surrogates. This strategy avoids explicit uncertainty quantification by treating trained neural operators as approximate samples from a Gaussian process. We provide novel theoretical convergence guarantees, based on Gaussian processes in the infinite-dimensional setting, under minimal assumptions. We benchmark our method against existing baselines on functional optimization tasks involving partial differential equations and other nonlinear operator-driven phenomena, demonstrating improved sample efficiency and competitive performance.
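As a rough illustration of the sample-then-optimize loop the abstract describes, here is a minimal NumPy sketch. It is not the paper's implementation: a random-feature ridge regression (refit with a fresh random draw each round) stands in for a trained neural operator as an approximate Gaussian-process posterior sample, and the operator, functional, candidate pool, and all dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the costly unknown operator G: maps a discretized
# input function (vector in R^d) to an output function (vector in R^m).
d, m = 5, 8
A = rng.standard_normal((m, d))
def operator(x):
    return np.tanh(A @ x)

# Known, cheap-to-evaluate functional J of the operator's output
# (here: squared distance to a fixed target profile).
target = np.full(m, 0.3)
def functional(y):
    return float(np.sum((y - target) ** 2))

def fit_surrogate(X, Y, n_feat=200, reg=1e-3):
    """Random-feature ridge regression, refit with fresh random features
    each call. The randomness over features plays the role of 'train a
    randomly initialized neural operator', so each fit acts as an
    approximate posterior sample rather than a posterior mean."""
    W = rng.standard_normal((n_feat, X.shape[1]))
    b = rng.uniform(0.0, 2.0 * np.pi, n_feat)
    def phi(X_):
        return np.cos(X_ @ W.T + b) / np.sqrt(n_feat)
    P = phi(X)
    coef = np.linalg.solve(P.T @ P + reg * np.eye(n_feat), P.T @ Y)
    return lambda X_: phi(X_) @ coef

# Sample-then-optimize loop: fit a surrogate sample, minimize the known
# functional over the sample's predictions, then spend one costly query.
X = rng.standard_normal((3, d))               # initial designs
Y = np.stack([operator(x) for x in X])
for t in range(10):
    surrogate = fit_surrogate(X, Y)           # approximate GP sample
    cand = rng.standard_normal((256, d))      # candidate input functions
    scores = [functional(y) for y in surrogate(cand)]
    x_next = cand[int(np.argmin(scores))]     # optimize J over the sample
    X = np.vstack([X, x_next])
    Y = np.vstack([Y, operator(x_next)])      # one costly operator query

best = min(functional(y) for y in Y)
```

Note that the only expensive call per round is the single `operator` query at the sampled optimum; the functional is evaluated freely over the whole candidate pool, matching the cost model in the abstract.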