GPTOpt: Towards Efficient LLM-Based Black-Box Optimization

📅 2025-10-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limitations of Bayesian optimization (BO) for black-box problems, namely heavy reliance on manual hyperparameter tuning, poor generalization across tasks, and limited sample efficiency, this paper proposes an end-to-end, derivative-free optimization framework powered by large language models (LLMs). Methodologically, it (1) constructs a synthetic dataset of diverse BO trajectories covering a broad range of parameter configurations; (2) fine-tunes LLMs on these trajectories so they implicitly learn global optimization strategies rather than merely fitting specific hyperparameters; and (3) combines continuous-space sampling with in-context learning to enable zero-shot transfer across tasks. On multiple standard black-box benchmarks, the method outperforms classical BO while using fewer function queries, demonstrating superior sample efficiency and strong generalization, and providing empirical evidence that LLMs can serve as general-purpose optimizers.
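The data-generation step in (1) can be illustrated with a minimal sketch: roll out simple optimizer runs, each with a different exploration setting, on randomly sampled synthetic functions, and record the evaluation trajectories as text. The function family, the toy optimizer, and the serialization format below are all assumptions for illustration, not the paper's actual pipeline.

```python
# Hypothetical sketch of synthetic-trajectory generation. The random
# objective family, the explore/exploit optimizer, and the text format
# are illustrative stand-ins for the paper's BO-based pipeline.
import math
import random

def random_objective(seed):
    """A random 1-D multimodal function standing in for a black-box task."""
    rng = random.Random(seed)
    a, b, c = rng.uniform(0.5, 2.0), rng.uniform(1.0, 6.0), rng.uniform(-1.0, 1.0)
    return lambda x: a * math.sin(b * x) + (x - c) ** 2

def rollout(objective, n_steps, explore, rng):
    """Greedy search with random exploration; `explore` mimics a BO
    hyperparameter whose value varies across trajectories."""
    best_x = rng.uniform(-2.0, 2.0)
    best_y = objective(best_x)
    trajectory = [(best_x, best_y)]
    for _ in range(n_steps - 1):
        if rng.random() < explore:
            x = rng.uniform(-2.0, 2.0)        # explore globally
        else:
            x = best_x + rng.gauss(0.0, 0.3)  # exploit near the incumbent
        y = objective(x)
        trajectory.append((x, y))
        if y < best_y:
            best_x, best_y = x, y
    return trajectory

def serialize(trajectory):
    """Flatten a trajectory into text an LLM could be fine-tuned on."""
    return " ; ".join(f"x={x:.3f} y={y:.3f}" for x, y in trajectory)

rng = random.Random(0)
dataset = []
for seed in range(100):
    f = random_objective(seed)
    explore = rng.uniform(0.05, 0.5)  # vary the "parameterization" per run
    dataset.append(serialize(rollout(f, n_steps=10, explore=explore, rng=rng)))

print(len(dataset), dataset[0][:40])
```

Varying `explore` per run plays the role of the "broad parameter configurations" in the summary: the model sees trajectories produced under many optimizer settings rather than a single fixed one.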

📝 Abstract
Global optimization of expensive, derivative-free black-box functions demands extreme sample efficiency. Classical methods such as Bayesian Optimization (BO) can be effective, but they often require careful parameter tuning to each application domain. At the same time, Large Language Models (LLMs) have shown broad capabilities, yet state-of-the-art models remain limited in solving continuous black-box optimization tasks. We introduce GPTOpt, an LLM-based optimization method that equips LLMs with continuous black-box optimization capabilities. By fine-tuning large language models on extensive synthetic datasets derived from diverse BO parameterizations, GPTOpt leverages LLM pre-training to generalize across optimization tasks. On a variety of black-box optimization benchmarks, GPTOpt surpasses traditional optimizers, highlighting the capacity of LLMs for advanced numerical reasoning and introducing a flexible framework for global optimization without parameter tuning.
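At inference time, the loop the abstract describes can be sketched as follows: the evaluation history is serialized into a prompt, the fine-tuned model proposes the next query point, and the loop repeats until the query budget is spent. `propose_next` below is a stub standing in for the LLM, and the prompt format is an assumption.

```python
# Hedged sketch of an LLM-in-the-loop black-box optimization step.
# propose_next is a placeholder for the fine-tuned model: here it just
# perturbs the best point seen so far, parsed back out of the prompt.
import random

def propose_next(prompt, rng):
    """Stub for the LLM: parse the history, perturb the incumbent best."""
    pairs = [seg.split() for seg in prompt.split(" ; ") if seg]
    best = min(pairs, key=lambda p: float(p[1].split("=")[1]))
    return float(best[0].split("=")[1]) + rng.gauss(0.0, 0.3)

def optimize(objective, n_queries, rng):
    history = []
    x = rng.uniform(-2.0, 2.0)  # initial random query
    for _ in range(n_queries):
        y = objective(x)
        history.append((x, y))
        prompt = " ; ".join(f"x={a:.3f} y={b:.3f}" for a, b in history)
        x = propose_next(prompt, rng)  # model picks the next point
    return min(history, key=lambda p: p[1])

rng = random.Random(1)
best_x, best_y = optimize(lambda x: (x - 0.7) ** 2, n_queries=50, rng=rng)
print(f"best x={best_x:.3f} f={best_y:.4f}")
```

The key design point is that the model only ever sees the serialized history, so the same loop applies unchanged to any black-box objective, which is what lets a single fine-tuned model transfer across tasks without per-task tuning.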
Problem

Research questions and friction points this paper is trying to address.

Optimizing expensive derivative-free black-box functions with extreme sample efficiency
Overcoming limitations of classical methods requiring careful parameter tuning
Enabling LLMs to solve continuous black-box optimization tasks effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tunes LLMs on synthetic BO datasets
Enables continuous black-box optimization capabilities
Eliminates parameter tuning for global optimization
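The fine-tuning step in the bullets above presumably turns each recorded trajectory into supervised pairs where a history prefix predicts the next queried point. The record format below is an assumption for illustration:

```python
# Hypothetical conversion of one trajectory into fine-tuning examples:
# each prefix of the run becomes the prompt, the next query the target.
def to_examples(trajectory):
    """Each prefix of the trajectory predicts the next queried point."""
    examples = []
    for i in range(1, len(trajectory)):
        context = " ; ".join(f"x={x:.3f} y={y:.3f}" for x, y in trajectory[:i])
        target = f"x={trajectory[i][0]:.3f}"
        examples.append({"prompt": context, "completion": target})
    return examples

traj = [(-1.2, 3.61), (0.1, 0.36), (0.6, 0.01)]
pairs = to_examples(traj)
print(len(pairs), pairs[-1]["completion"])  # → 2 x=0.600
```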