🤖 AI Summary
Existing multi-agent collaborative systems suffer from two key limitations: (1) rigid, pre-specified architectures, or reliance on majority voting and round-table debate, which suppress minority correct opinions; and (2) graph-based modeling that optimizes only agent-level performance while neglecting interaction quality. This paper proposes a dynamic collaboration mechanism grounded in *verbal reinforcement learning*, the first to explicitly model communication quality (e.g., coherence and robustness) as a learnable optimization objective. Leveraging multi-agent reinforcement learning with adaptive, learnable graph structures, the method enables self-organizing, evolving debate processes, integrating carefully designed action spaces, fine-grained reward feedback, and explicit interaction-quality evaluation. Empirically, the approach achieves significant improvements over both single-agent baselines and state-of-the-art multi-agent frameworks across diverse tasks, including mathematical reasoning, scientific question answering, creative writing, and numerical ranking, demonstrating that high-fidelity inter-agent interaction is critical for enhancing collective reasoning capability.
📝 Abstract
Large Language Models (LLMs) have shown remarkable reasoning capabilities in mathematical and scientific tasks. To enhance complex reasoning, multi-agent systems have been proposed to harness the collective intelligence of LLM agents. However, existing collaboration structures are either predefined or rely on majority voting or round-table debates, which can suppress correct but less dominant agent contributions. Recent approaches model multi-agent systems as graph networks but optimize purely for agent performance, neglecting the quality of interactions. We hypothesize that effective agent communication is crucial for multi-agent reasoning and that debate quality plays a significant role. To address this, we propose $ours$, a multi-agent verbal reinforcement learning algorithm that dynamically constructs and refines multi-agent collaboration structures. Our method defines action spaces and a feedback mechanism that evaluates communication robustness and coherence throughout the debate; the final decision is reached through a majority vote among all agents. We evaluate $ours$ on diverse reasoning tasks, including mathematical reasoning, creative writing, scientific reasoning, and numerical sorting. Results demonstrate that our approach significantly outperforms both single-agent prompting methods and state-of-the-art multi-agent frameworks.
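The failure mode motivating this work, plain majority voting suppressing a correct minority opinion, can be illustrated with a toy simulation. This is a minimal sketch under assumed conventions (all function names and the fixed adjacency structure are hypothetical, not the paper's algorithm, which instead learns the graph and rewards communication quality):

```python
from collections import Counter

def majority_vote(answers):
    """Return the most frequent answer (ties broken by first occurrence)."""
    return Counter(answers).most_common(1)[0][0]

def debate_round(answers, adjacency):
    """One naive debate round: each agent adopts the majority view
    among itself and the neighbors it can see."""
    updated = []
    for i, own in enumerate(answers):
        visible = [own] + [answers[j] for j in adjacency[i]]
        updated.append(majority_vote(visible))
    return updated

# Three agents on a fully connected, static graph;
# only agent 2 initially holds the correct answer "7".
answers = ["5", "5", "7"]
adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

for _ in range(2):
    answers = debate_round(answers, adjacency)

# The correct minority opinion is overwritten before the final vote.
print(majority_vote(answers))  # prints "5", not the correct "7"
```

Under this static round-table scheme the minority answer never survives to the final vote, which is precisely why a learnable collaboration structure with interaction-quality feedback is proposed.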