🤖 AI Summary
Large language models (LLMs) exhibit sparse and inefficient sampling in high-dimensional expensive black-box optimization due to the absence of domain-specific priors. Method: This paper proposes a prior-free, locally focused optimization framework. It partitions the search space into evaluable "meta-arms" and employs a bandit mechanism to dynamically select high-potential subregions; an LLM then acts as a conditional candidate generator, efficiently producing high-quality evaluation points within the selected subregion. The method integrates adaptive spatial partitioning, bandit-inspired scoring, and gradient-free optimization, requiring no explicit domain knowledge. Contribution/Results: On standard benchmarks, the approach consistently outperforms global LLM-based sampling and matches or exceeds the performance of state-of-the-art Bayesian optimization and trust-region methods, demonstrating robustness and efficacy without domain priors.
📝 Abstract
Large Language Models (LLMs) have recently emerged as effective surrogate models and candidate generators within global optimization frameworks for expensive black-box functions. Despite promising results, LLM-based methods often struggle in high-dimensional search spaces or when lacking domain-specific priors, leading to sparse or uninformative suggestions. To overcome these limitations, we propose HOLLM, a novel global optimization algorithm that enhances LLM-driven sampling by partitioning the search space into promising subregions. Each subregion acts as a "meta-arm" selected via a bandit-inspired scoring mechanism that effectively balances exploration and exploitation. Within each selected subregion, an LLM then proposes high-quality candidate points, without any explicit domain knowledge. Empirical evaluation on standard optimization benchmarks shows that HOLLM consistently matches or surpasses leading Bayesian optimization and trust-region methods, while substantially outperforming global LLM-based sampling strategies.
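The partition-then-select loop described above can be sketched in code. This is a minimal, hedged illustration, not the paper's implementation: it partitions the unit cube into axis-aligned "meta-arms," scores each with a UCB-style bandit rule, and uses uniform random sampling inside the chosen subregion as a stand-in for the LLM candidate generator. The function name `hollm_sketch` and all hyperparameters are illustrative assumptions.

```python
import math
import random

def hollm_sketch(objective, dim, n_partitions=4, budget=40, per_round=2, seed=0):
    """Illustrative HOLLM-style loop (maximization): partition, score, sample.

    NOTE: a simplified sketch. The real method adapts its partitioning and
    uses an LLM to propose candidates; here we slab the first coordinate
    and sample uniformly within the selected slab.
    """
    rng = random.Random(seed)
    # Partition [0, 1]^dim into slabs along the first coordinate (the "meta-arms").
    regions = [(i / n_partitions, (i + 1) / n_partitions) for i in range(n_partitions)]
    counts = [0] * n_partitions                    # evaluations spent per region
    best_in = [float("-inf")] * n_partitions       # best value observed per region
    best_x, best_y = None, float("-inf")
    total = 0
    while total < budget:
        # Bandit-inspired score: exploit the best value seen in a region,
        # plus a UCB-style exploration bonus for rarely tried regions.
        def score(i):
            if counts[i] == 0:
                return float("inf")  # force at least one trial per region
            return best_in[i] + math.sqrt(2.0 * math.log(total + 1) / counts[i])

        arm = max(range(n_partitions), key=score)
        lo, hi = regions[arm]
        for _ in range(per_round):
            # Stand-in for the LLM generator: uniform samples in the subregion.
            x = [rng.uniform(lo, hi)] + [rng.random() for _ in range(dim - 1)]
            y = objective(x)
            counts[arm] += 1
            total += 1
            best_in[arm] = max(best_in[arm], y)
            if y > best_y:
                best_x, best_y = x, y
    return best_x, best_y
```

In the actual method, the uniform sampler would be replaced by an LLM prompted with the selected subregion's bounds and the evaluation history, and the partitioning would adapt as observations accumulate.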