🤖 AI Summary
Bayesian optimization (BO) in large discrete spaces suffers from expensive acquisition-function optimization, since the absence of gradients rules out efficient maximization. Method: We propose a scalable Thompson sampling–based approach that bypasses explicit acquisition-function optimization and instead directly models the probability that a candidate solution achieves the maximum reward. Contribution/Results: Our key innovations include (i) the first integration of large language model (LLM) prompt priors into the Thompson sampling framework; (ii) a novel regret bound for a variational formulation of Thompson sampling; and (iii) a careful mechanism for adapting the sampling distribution to the posterior probability of maximality. By conditioning the LLM on task-specific prompts and fine-tuning it online, we progressively update the model and sample efficiently from the distribution over the optimal candidate. Empirical evaluation on FAQ optimization, protein design, and quantum circuit generation demonstrates significantly improved sample efficiency with near-constant computational overhead.
📝 Abstract
Bayesian optimization in large unstructured discrete spaces is often hindered by the computational cost of maximizing acquisition functions in the absence of gradients. We propose a scalable alternative based on Thompson sampling that eliminates the need for acquisition-function maximization by directly parameterizing the probability that a candidate yields the maximum reward. Our approach, Thompson Sampling via Fine-Tuning (ToSFiT), leverages the prior knowledge embedded in prompt-conditioned large language models and incrementally adapts them toward the posterior. Theoretically, we derive a novel regret bound for a variational formulation of Thompson sampling that matches the strong guarantees of its standard counterpart. Our analysis reveals the critical role of careful adaptation to the posterior probability of maximality, the principle that underpins our ToSFiT algorithm. Empirically, we validate our method on three diverse tasks: FAQ response refinement, thermally stable protein search, and quantum circuit design. We demonstrate that online fine-tuning significantly improves sample efficiency, with negligible computational overhead.
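The core mechanism the abstract describes can be illustrated with a toy stand-in: Thompson sampling over a discrete candidate set needs no acquisition-function maximization, because drawing one plausible reward per candidate from the posterior and taking the argmax is itself a sample from the posterior probability of maximality. The sketch below uses a simple conjugate Gaussian posterior per candidate in place of the paper's prompt-conditioned, fine-tuned LLM; the candidate set, rewards, and update rule are illustrative assumptions, not the ToSFiT implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden ground truth over 4 discrete candidates (toy assumption; in ToSFiT
# the candidates would be sequences scored by an expensive black-box reward).
true_rewards = np.array([0.1, 0.4, 0.9, 0.3])  # candidate 2 is best
noise_sd = 0.1

# Independent Gaussian posterior per candidate, prior N(0, 1), known noise.
mu = np.zeros(4)    # posterior means
prec = np.ones(4)   # posterior precisions (1 / variance)

counts = np.zeros(4, dtype=int)
for t in range(500):
    # One joint posterior draw; the argmax is a sample from the
    # posterior probability that each candidate is the maximizer.
    sampled = mu + rng.standard_normal(4) / np.sqrt(prec)
    a = int(np.argmax(sampled))
    # Observe a noisy reward for the chosen candidate.
    r = true_rewards[a] + noise_sd * rng.standard_normal()
    # Conjugate Gaussian update for that candidate only.
    prec[a] += 1.0 / noise_sd**2
    mu[a] += (r - mu[a]) / (noise_sd**2 * prec[a])
    counts[a] += 1

print(counts)  # selections concentrate on the best candidate
```

In ToSFiT, the role of `mu`/`prec` is played by the LLM's prompt-conditioned distribution over candidates, and the conjugate update is replaced by online fine-tuning toward the posterior, but the selection rule — sample, then argmax — is the same.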