OPTAGENT: Optimizing Multi-Agent LLM Interactions Through Verbal Reinforcement Learning for Enhanced Reasoning

📅 2025-10-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multi-agent collaborative systems suffer from two key limitations: (1) rigid, pre-specified architectures—or reliance on majority voting/round-table debate—that suppress minority correct opinions; and (2) graph-based modeling that optimizes only agent-level performance while neglecting interaction quality. This paper proposes a dynamic collaboration mechanism grounded in *verbal reinforcement learning*, the first to explicitly model communication quality—e.g., coherence and robustness—as a learnable optimization objective. Leveraging multi-agent reinforcement learning with adaptive, learnable graph structures, our method enables self-organizing and evolving debate processes. It integrates carefully designed action spaces, fine-grained reward feedback, and explicit interaction-quality evaluation. Empirically, our approach achieves significant improvements over both single-agent baselines and state-of-the-art multi-agent frameworks across diverse tasks—including mathematical reasoning, scientific question answering, creative writing, and numerical ranking—demonstrating that high-fidelity inter-agent interaction is critical for enhancing collective reasoning capability.

📝 Abstract
Large Language Models (LLMs) have shown remarkable reasoning capabilities in mathematical and scientific tasks. To enhance complex reasoning, multi-agent systems have been proposed to harness the collective intelligence of LLM agents. However, existing collaboration structures are either predefined or rely on majority voting or round-table debates, which can suppress correct but less dominant agent contributions. Recent approaches model multi-agent systems as graph networks but optimize purely for agent performance, neglecting the quality of interactions. We hypothesize that effective agent communication is crucial for multi-agent reasoning and that debating quality plays a significant role. To address this, we propose OptAgent, a multi-agent verbal reinforcement learning algorithm that dynamically constructs and refines multi-agent collaboration structures. Our method defines action spaces and a feedback mechanism that evaluates communication robustness and coherence throughout the debate. The final decision is achieved through a majority vote over all the agents. We assess OptAgent on various reasoning tasks, including mathematical reasoning, creative writing, scientific reasoning, and numerical sorting. Results demonstrate that our approach significantly outperforms single-agent prompting methods and state-of-the-art multi-agent frameworks on diverse tasks.
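The abstract states that the final decision is reached by a majority vote over all agents. A minimal sketch of that aggregation step (the function name and sample answers are illustrative, not taken from the paper's implementation):

```python
from collections import Counter

def majority_vote(answers):
    """Return the most common final answer among debating agents.

    Counter.most_common breaks ties by first-seen order, so the
    earliest-proposed answer wins an exact tie.
    """
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

# Illustrative final-round answers from four agents
final_answers = ["42", "41", "42", "42"]
print(majority_vote(final_answers))  # prints "42"
```

Note that a plain majority vote is exactly the mechanism the paper argues can suppress a correct minority opinion during the debate itself; here it is applied only after the debate structure has been optimized.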
Problem

Research questions and friction points this paper is trying to address.

Optimizing multi-agent collaboration structures for enhanced reasoning
Improving communication quality in multi-agent LLM interactions
Addressing suppression of correct minority opinions in debates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses verbal reinforcement learning for multi-agent optimization
Dynamically constructs and refines collaboration structures
Evaluates communication robustness and coherence throughout debates
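The bullets above describe a collaboration structure that is dynamically refined using interaction-quality feedback. One way such a structure could be represented, purely as a hypothetical sketch and not the authors' algorithm, is an edge-weight matrix over agents that is nudged up or down by a per-edge quality score (e.g., a coherence/robustness rating from an evaluator) and pruned when a link falls below a threshold:

```python
def refine_graph(adj, quality, lr=0.1, threshold=0.2):
    """Hypothetical refinement of an agent-communication graph.

    adj:     n x n list of lists; adj[i][j] is the weight of the
             communication link from agent i to agent j
    quality: n x n list of lists; quality[i][j] in [0, 1] scores how
             coherent/robust the i->j exchanges were this round
    Edges with above-average quality are strengthened, below-average
    ones are weakened, and weak links are pruned to zero.
    """
    n = len(adj)
    refined = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            w = adj[i][j] + lr * (quality[i][j] - 0.5)
            w = min(1.0, max(0.0, w))           # keep weights in [0, 1]
            refined[i][j] = w if w >= threshold else 0.0  # prune weak links
    return refined
```

Repeating this update across debate rounds would let the graph self-organize: consistently productive channels persist while noisy ones are dropped, which matches the self-organizing behavior the summary attributes to the method.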
Authors
Zhenyu Bi, Ph.D. Student, Virginia Tech (Natural Language Processing, Information Retrieval)
Meng Lu, Virginia Tech
Yang Li, College of William and Mary
Swastik Roy, Amazon Alexa AI
Weijie Guan, Virginia Tech
Morteza Ziyadi, Amazon Alexa AI
Xuan Wang, Virginia Tech