Improving LLM-based Global Optimization with Search Space Partitioning

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit sparse and inefficient sampling in high-dimensional, expensive black-box optimization due to the absence of domain-specific priors. Method: This paper proposes HOLLM, a prior-free, locally focused optimization framework. It partitions the search space into evaluable "meta-arms" and employs a bandit mechanism to dynamically select high-potential subregions; an LLM then acts as a conditional candidate generator, efficiently producing high-quality evaluation points within each selected subregion. The method integrates adaptive spatial partitioning, bandit-inspired scoring, and gradient-free optimization, requiring no explicit domain knowledge. Contribution/Results: On standard benchmarks, the approach consistently outperforms global LLM-based sampling and matches or exceeds state-of-the-art Bayesian optimization and trust-region methods, demonstrating robustness and efficacy without domain priors.

📝 Abstract
Large Language Models (LLMs) have recently emerged as effective surrogate models and candidate generators within global optimization frameworks for expensive black-box functions. Despite promising results, LLM-based methods often struggle in high-dimensional search spaces or when lacking domain-specific priors, leading to sparse or uninformative suggestions. To overcome these limitations, we propose HOLLM, a novel global optimization algorithm that enhances LLM-driven sampling by partitioning the search space into promising subregions. Each subregion acts as a "meta-arm" selected via a bandit-inspired scoring mechanism that effectively balances exploration and exploitation. Within each selected subregion, an LLM then proposes high-quality candidate points, without any explicit domain knowledge. Empirical evaluation on standard optimization benchmarks shows that HOLLM consistently matches or surpasses leading Bayesian optimization and trust-region methods, while substantially outperforming global LLM-based sampling strategies.
Problem

Research questions and friction points this paper is trying to address.

Improving LLM-based global optimization in high-dimensional spaces
Enhancing LLM-driven sampling via search space partitioning
Overcoming sparse suggestions in blackbox function optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Partitions search space into promising subregions
Uses bandit-inspired scoring for subregion selection
LLM proposes candidates without domain knowledge
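The partition-select-propose loop described above can be sketched in a few lines. The paper's exact partitioning and scoring rules are not given here, so this is a minimal sketch under assumptions: a fixed axis-aligned partition stands in for adaptive partitioning, a classic UCB score stands in for the paper's bandit-inspired scoring, and uniform sampling inside the chosen subregion stands in for the LLM candidate generator. All names (`ucb_score`, `optimize`) are hypothetical.

```python
import math
import random

def ucb_score(region, total_evals, c=1.4):
    """UCB-style score: mean observed reward plus an exploration bonus."""
    if region["n"] == 0:
        return float("inf")  # unvisited subregions are tried first
    mean = region["reward_sum"] / region["n"]
    return mean + c * math.sqrt(math.log(total_evals) / region["n"])

def optimize(f, bounds, n_regions=4, budget=40, seed=0):
    """Partition the first axis into slabs (stand-in for adaptive
    partitioning), select a slab by UCB, then sample a candidate
    inside it (stand-in for the LLM proposal step). Minimizes f."""
    rng = random.Random(seed)
    lo0, hi0 = bounds[0]
    width = (hi0 - lo0) / n_regions
    regions = [{"lo": lo0 + i * width, "hi": lo0 + (i + 1) * width,
                "n": 0, "reward_sum": 0.0} for i in range(n_regions)]
    best_x, best_y = None, float("inf")
    for t in range(1, budget + 1):
        region = max(regions, key=lambda r: ucb_score(r, t))
        # Candidate proposal inside the selected subregion.
        x = [rng.uniform(region["lo"], region["hi"])]
        x += [rng.uniform(lo, hi) for lo, hi in bounds[1:]]
        y = f(x)
        region["n"] += 1
        region["reward_sum"] += -y  # minimization: reward is negated loss
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

# Minimize the sphere function on [-5, 5]^2.
x_star, y_star = optimize(lambda x: sum(v * v for v in x), [(-5, 5)] * 2)
```

The point of the sketch is the division of labor: the bandit layer decides *where* to sample (balancing well-scoring slabs against under-explored ones), while the inner sampler decides *what* to sample there; HOLLM replaces that inner sampler with an LLM conditioned on the selected subregion.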