LILO: Bayesian Optimization with Interactive Natural Language Feedback

📅 2025-10-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of transforming complex, subjective natural language feedback into quantifiable optimization objectives, improving the practicality and efficiency of Bayesian optimization (BO) in human-in-the-loop settings. The authors propose LILO, a framework that leverages large language models (LLMs) to uniformly parse heterogeneous textual feedback, integrate user priors, and autonomously generate scalar utility estimates; it couples these estimates with Gaussian process modeling and sequential decision-making to preserve BO's sample efficiency and probabilistic uncertainty quantification. Unlike preference-based BO methods, LILO does not rely on structured feedback or hand-crafted kernels, enabling end-to-end, natural-language-driven optimization. Experiments show that LILO significantly outperforms standard BO and LLM-only optimizers when feedback is scarce, while generalizing well and supporting efficient human-AI collaboration across diverse domains.
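The loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `llm_utility` function stands in for the actual LLM call that scores free-form feedback, the kernel and UCB acquisition are generic choices, and all names and constants are hypothetical.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    # Squared-exponential kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def llm_utility(feedback_text):
    # Stand-in for the LLM call that maps free-form textual feedback
    # to a scalar utility; here a toy keyword heuristic.
    score = 0.5
    if "great" in feedback_text:
        score += 0.4
    if "bad" in feedback_text:
        score -= 0.4
    return score

def gp_posterior(X, y, X_query, noise=1e-4):
    # Standard GP regression posterior with unit signal variance.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X_query, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mu, np.maximum(var, 1e-12)

def ucb_next(X, y, candidates, beta=2.0):
    # Upper-confidence-bound acquisition over a candidate grid.
    mu, var = gp_posterior(X, y, candidates)
    return candidates[np.argmax(mu + beta * np.sqrt(var))]

# One round: score past feedback, then pick the next point to try.
X = np.array([[0.2], [0.8]])
y = np.array([llm_utility("too slow, bad"), llm_utility("great result")])
cands = np.linspace(0, 1, 101).reshape(-1, 1)
x_next = ucb_next(X, y, cands)
print(float(x_next[0]))
```

In a real deployment the heuristic would be replaced by a prompted LLM that returns a calibrated scalar, but the GP fit and acquisition step would look the same.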

📝 Abstract
For many real-world applications, feedback is essential in translating complex, nuanced, or subjective goals into quantifiable optimization objectives. We propose a language-in-the-loop framework that uses a large language model (LLM) to convert unstructured natural language feedback into scalar utilities and conduct Bayesian optimization (BO) over a numeric search space. Unlike preferential BO, which accepts only restricted feedback formats and requires customized models for each domain-specific problem, our approach leverages LLMs to turn varied types of textual feedback into consistent utility signals and to easily include flexible user priors without manual kernel design. At the same time, our method maintains the sample efficiency and principled uncertainty quantification of BO. We show that this hybrid method not only provides a more natural interface for the decision maker but also outperforms conventional BO baselines and LLM-only optimizers, particularly in feedback-limited regimes.
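The abstract's claim of "flexible user priors without manual kernel design" suggests priors entering the surrogate somewhere other than the kernel. One plausible place, shown as a hedged sketch below (my illustration, not the paper's stated mechanism), is the GP prior mean: a belief the LLM might elicit from the user, such as "good settings are usually near 0.7", becomes a mean function, and the GP regresses on residuals from it. All names and constants here are hypothetical.

```python
import numpy as np

def prior_mean(x):
    # Hypothetical user prior, e.g. elicited from "good settings are
    # usually near 0.7": a smooth bump centered at 0.7.
    return np.exp(-0.5 * ((x - 0.7) / 0.2) ** 2).ravel()

def rbf(A, B, ls=0.15):
    # Squared-exponential kernel.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def posterior_mean(X, y, Xq, noise=1e-4):
    # GP regression on residuals from the prior mean: before much
    # feedback accumulates, predictions revert to the user's prior.
    K = rbf(X, X) + noise * np.eye(len(X))
    resid = y - prior_mean(X)
    return prior_mean(Xq) + rbf(Xq, X) @ np.linalg.solve(K, resid)

# With a single observation, the posterior still reflects the prior
# away from the observed point.
X = np.array([[0.1]])
y = np.array([0.2])
Xq = np.linspace(0, 1, 5).reshape(-1, 1)
print(posterior_mean(X, y, Xq).round(3))
```

The appeal of this design is that the prior is a plain function of the input, so the LLM can produce it from conversational text without anyone hand-crafting a kernel.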
Problem

Research questions and friction points this paper is trying to address.

How to translate unstructured, subjective natural language feedback into quantifiable optimization objectives
How to incorporate flexible user priors without manual, domain-specific kernel design
How to retain BO's sample efficiency and uncertainty quantification when human feedback is scarce
Innovation

Methods, ideas, or system contributions that make the work stand out.

An LLM converts varied natural language feedback into consistent scalar utilities
Incorporates flexible user priors without manual kernel design
Maintains the sample efficiency and uncertainty quantification of BO