Multi-Objective Hierarchical Optimization with Large Language Models

📅 2026-01-20
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses key challenges faced by large language models (LLMs) in multi-objective optimization, including difficulties in numerical reasoning, balancing exploration and exploitation, and reconciling conflicting objectives. The authors propose a hierarchical optimization framework that uniquely integrates an LLM as both a surrogate model and a candidate sampler. By adaptively partitioning the input space into disjoint hyper-rectangular regions and employing a composite scoring function to rank these regions, the method directs the LLM to generate solutions exclusively within high-potential subspaces, thereby circumventing the need for global modeling. Under standard regularity assumptions, the approach guarantees convergence of the solution set to the true Pareto front in Hausdorff distance. Empirical evaluations demonstrate that the method significantly outperforms existing LLM-based optimizers on both synthetic and real-world benchmarks, achieving performance on par with state-of-the-art evolutionary and Bayesian optimization algorithms.
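The convergence guarantee above is stated in Hausdorff distance between the generated solution set and the true Pareto set. As a reminder of what that metric measures (this is the standard definition, not the paper's proof), a minimal NumPy sketch:

```python
import numpy as np

def hausdorff_distance(A, B):
    """Symmetric Hausdorff distance between finite point sets A (n, d) and B (m, d)."""
    # Pairwise Euclidean distances between every point of A and every point of B.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    # Directed distances: worst-case nearest-neighbour gap in each direction.
    d_ab = d.min(axis=1).max()  # sup over a in A of inf over b in B of ||a - b||
    d_ba = d.min(axis=0).max()  # sup over b in B of inf over a in A of ||a - b||
    return max(d_ab, d_ba)

# Illustrative (hypothetical) point sets: the approximation error shrinks
# as the candidate set approaches the Pareto set.
pareto = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
far    = np.array([[0.9, 0.9], [1.0, 1.0]])
near   = np.array([[0.05, 0.95], [0.55, 0.45], [0.95, 0.05]])
```

Convergence in this metric means both that every candidate ends up close to some Pareto-optimal point and that every part of the Pareto set is eventually covered by some candidate.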

πŸ“ Abstract
Despite their widespread adoption across domains, especially for their powerful reasoning capabilities, Large Language Models (LLMs) are not yet an off-the-shelf choice for driving multi-objective optimization. Conventional strategies rank high in benchmarks thanks to their intrinsic ability to handle numerical inputs and to careful modelling choices that balance exploration against Pareto-front exploitation and handle multiple (conflicting) objectives. In this paper, we close this gap by leveraging LLMs as surrogate models and candidate samplers inside a structured hierarchical search strategy. By adaptively partitioning the input space into disjoint hyper-rectangular regions and ranking them with a composite score function, we restrict the generative process of the LLM to specific, high-potential sub-spaces. This makes the problem easier to solve, as the LLM does not have to reason about the global structure of the problem, only about local structure. We show that under standard regularity assumptions, our algorithm generates candidate solutions that converge to the true Pareto set in Hausdorff distance. Empirically, it consistently outperforms global LLM-based multi-objective optimizers and is on par with standard evolutionary and Bayesian optimization algorithms on synthetic and real-world benchmarks.
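The partition-rank-sample loop described in the abstract can be sketched schematically. Everything below is a hypothetical illustration, not the paper's implementation: a uniform random sampler stands in for the LLM, the toy bi-objective function `f` is invented, and the composite score (a best-observed-value exploitation term plus a volume-based exploration bonus with weight `w`) is an assumed placeholder for the paper's actual ranking function.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Toy bi-objective problem on [0, 1]^2 (both objectives minimized)."""
    return np.array([x[0], 1.0 - np.sqrt(x[0]) + x[1]])

def split(region):
    """Bisect a hyper-rectangle (lo, hi) along its longest side into two disjoint children."""
    lo, hi = region
    axis = int(np.argmax(hi - lo))
    mid = (lo[axis] + hi[axis]) / 2.0
    lo2, hi1 = lo.copy(), hi.copy()
    lo2[axis] = mid
    hi1[axis] = mid
    return (lo, hi1), (lo2, hi)

def score(region, evals, w=0.5):
    """Hypothetical composite score: exploitation (negated best observed objective
    sum inside the region) plus an exploration bonus proportional to volume."""
    lo, hi = region
    inside = [y for x, y in evals if np.all(x >= lo) and np.all(x <= hi)]
    exploit = -min(y.sum() for y in inside) if inside else 0.0
    explore = float(np.prod(hi - lo))
    return exploit + w * explore

def sample_in(region, n):
    """Stand-in for the LLM sampler: propose candidates only inside the region."""
    lo, hi = region
    return lo + rng.random((n, lo.size)) * (hi - lo)

# Hierarchical loop: keep a disjoint partition, always refine the top-ranked region.
regions = [(np.zeros(2), np.ones(2))]
evals = []
for _ in range(20):
    idx = max(range(len(regions)), key=lambda i: score(regions[i], evals))
    best = regions.pop(idx)
    for x in sample_in(best, 4):
        evals.append((x, f(x)))
    regions.extend(split(best))
```

In the method as summarized, `sample_in` would instead prompt the LLM conditioned on the bounds of the selected region, so the model only ever reasons about one local sub-space per step rather than the whole search domain.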
Problem

Research questions and friction points this paper is trying to address.

Multi-Objective Optimization
Large Language Models
Pareto Front
Hierarchical Optimization
Surrogate Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Multi-Objective Optimization
Hierarchical Search
Pareto Front
Surrogate Modeling