Let's Have a Conversation: Designing and Evaluating LLM Agents for Interactive Optimization

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional one-shot optimization approaches struggle to capture decision-makers' true preferences regarding objectives, constraints, and trade-offs, and often lack effective human-in-the-loop interaction mechanisms. This work proposes an interactive optimization agent framework powered by large language models, which simulates multi-stakeholder dialogues through role-playing. By integrating internal utility function modeling, domain-specific prompt engineering, and structured tool invocation, the framework dynamically generates, explains, and refines solutions. The study introduces a scalable and reproducible evaluation methodology, generating thousands of dialogue rounds in a school timetabling case study. Results demonstrate that the proposed customized agent converges to significantly better solutions in fewer interaction rounds than general-purpose chatbots.
📝 Abstract
Optimization is as much about modeling the right problem as it is about solving it. Identifying the right objectives, constraints, and trade-offs demands extensive interaction between researchers and stakeholders. Large language models can empower decision-makers with optimization capabilities through interactive optimization agents that propose, interpret, and refine solutions. However, a conversation-based interaction is fundamentally harder to evaluate than a traditional one-shot approach. This paper proposes a scalable and replicable methodology for evaluating optimization agents through conversations. We build LLM-powered decision agents that role-play diverse stakeholders, each governed by an internal utility function but communicating like a real decision-maker. We generate thousands of conversations in a school scheduling case study. Results show that one-shot evaluation is severely limiting: the same optimization agent converges to much higher-quality solutions through conversations. The paper then uses this methodology to demonstrate that tailored optimization agents, endowed with domain-specific prompts and structured tools, achieve significant improvements in solution quality in fewer interactions compared to general-purpose chatbots. These findings provide evidence of the benefits of emerging solutions at the AI-optimization interface for expanding the reach of optimization technologies in practice. They also highlight the role of operations research expertise in facilitating interactive deployments through the design of effective and reliable optimization agents.
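The evaluation loop the abstract describes — a simulated stakeholder governed by a hidden utility function, conversing with an optimization agent that refines its proposal each round — can be sketched in miniature. Everything below is an illustrative assumption, not the paper's implementation: the linear utility, the feature-level feedback rule, and the placeholder update step standing in for an LLM-plus-solver agent.

```python
# Hypothetical stakeholder: hidden linear utility over schedule features
# (e.g. teacher gaps, room changes, late classes). The weights are hidden
# from the optimization agent, mirroring the paper's evaluation setup.
class Stakeholder:
    def __init__(self, weights):
        self.weights = weights

    def utility(self, solution):
        return sum(w * x for w, x in zip(self.weights, solution))

    def feedback(self, solution):
        # Verbalized preference: point at the feature contributing least to
        # utility, the way a decision-maker critiques a proposed timetable.
        contribs = [w * x for w, x in zip(self.weights, solution)]
        return min(range(len(contribs)), key=lambda i: contribs[i])


def run_conversation(stakeholder, rounds=10):
    """Placeholder optimization agent: each round, nudge the criticized
    feature in the direction the stakeholder prefers (a stand-in for an
    LLM issuing a structured tool call to a solver)."""
    solution = [0.5, 0.5, 0.5]
    history = [stakeholder.utility(solution)]  # history[0] = one-shot quality
    for _ in range(rounds):
        idx = stakeholder.feedback(solution)
        step = 0.1 if stakeholder.weights[idx] > 0 else -0.1
        solution[idx] = max(0.0, min(1.0, solution[idx] + step))
        history.append(stakeholder.utility(solution))
    return history


history = run_conversation(Stakeholder(weights=[1.0, -2.0, 0.5]))
print(f"one-shot utility: {history[0]:.2f}, after dialogue: {history[-1]:.2f}")
```

Comparing `history[0]` (the one-shot solution) against `history[-1]` (after several feedback rounds) reproduces, in toy form, the paper's headline observation that conversational refinement reaches higher-quality solutions than one-shot evaluation reveals.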
Problem

Research questions and friction points this paper is trying to address.

interactive optimization
LLM agents
conversational evaluation
optimization agents
decision-making
Innovation

Methods, ideas, or system contributions that make the work stand out.

interactive optimization
LLM agents
conversational evaluation
domain-specific prompting
utility-driven simulation