Unlocking the Power of Multi-Agent LLM for Reasoning: From Lazy Agents to Deliberation

📅 2025-11-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the “lazy agent” problem—where low-contribution agents degrade collaborative reasoning—and stability issues (e.g., directional drift and noise accumulation) in multi-round interactions of multi-agent LLM systems, this paper proposes a cooperative reasoning framework grounded in causal influence measurement and verifiable reward mechanisms. Methodologically: (1) a causal influence metric identifies underperforming agents; (2) a sparse, verifiable reward signal—supporting reflection and reset—guides agents to actively correct errors and restart reasoning when necessary; (3) multi-agent dialogue modeling and dynamic collaboration control are jointly optimized within a reinforcement learning framework. Experiments show that the framework suppresses lazy behavior, yielding an average 12.7% accuracy gain on complex tasks including mathematical reasoning and symbolic logic. The authors present it as the first framework to achieve stable, traceable, and self-correcting cooperative reasoning in multi-agent LLMs.
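The summary's causal influence metric can be illustrated with a minimal counterfactual sketch: score an agent's turn by how much the verified outcome degrades when that turn is ablated. All names here (`Turn`, `score_outcome`, `causal_influence`) are illustrative assumptions; the paper's actual metric is not specified in this summary.

```python
# Hypothetical counterfactual influence score for one agent's turn.
# A lazy turn is one whose removal barely changes the verified outcome.
from dataclasses import dataclass

@dataclass
class Turn:
    agent: str      # e.g. "meta" (planner) or "reasoner" (executor)
    message: str

def score_outcome(dialogue):
    # Stand-in for a verifier scoring the dialogue's final answer.
    # Toy rule: reward 1.0 if any turn contains the correct token "42".
    return 1.0 if any("42" in t.message for t in dialogue) else 0.0

def causal_influence(dialogue, idx):
    """Influence of turn `idx`: drop in outcome score when that turn
    is ablated from the dialogue."""
    full = score_outcome(dialogue)
    ablated = score_outcome(dialogue[:idx] + dialogue[idx + 1:])
    return full - ablated

dialogue = [
    Turn("meta", "Plan: compute 6 * 7 step by step."),
    Turn("reasoner", "6 * 7 = 42, so the answer is 42."),
]
print(causal_influence(dialogue, 0))  # ablating the plan changes nothing: 0.0
print(causal_influence(dialogue, 1))  # the reasoner's turn carries the answer: 1.0
```

Under this toy verifier, the meta agent's turn scores zero influence: the hallmark of a lazy agent whose contribution the framework would flag.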

📝 Abstract
Large Language Models (LLMs) trained with reinforcement learning and verifiable rewards have achieved strong results on complex reasoning tasks. Recent work extends this paradigm to a multi-agent setting, where a meta-thinking agent proposes plans and monitors progress while a reasoning agent executes subtasks through sequential conversational turns. Despite promising performance, we identify a critical limitation: lazy agent behavior, in which one agent dominates while the other contributes little, undermining collaboration and collapsing the setup to an ineffective single agent. In this paper, we first provide a theoretical analysis showing why lazy behavior naturally arises in multi-agent reasoning. We then introduce a stable and efficient method for measuring causal influence, helping mitigate this issue. Finally, as collaboration intensifies, the reasoning agent risks getting lost in multi-turn interactions and trapped by previous noisy responses. To counter this, we propose a verifiable reward mechanism that encourages deliberation by allowing the reasoning agent to discard noisy outputs, consolidate instructions, and restart its reasoning process when necessary. Extensive experiments demonstrate that our framework alleviates lazy agent behavior and unlocks the full potential of the multi-agent framework for complex reasoning tasks.
Problem

Research questions and friction points this paper is trying to address.

Lazy agent behavior: one agent dominates while the other contributes little, collapsing the multi-agent setup to an ineffective single agent
How to measure each agent's causal influence stably and efficiently so low-contribution agents can be detected
Reasoning agents get lost in multi-turn interactions and trapped by earlier noisy responses as collaboration intensifies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Theoretical analysis of why lazy agent behavior naturally arises in multi-agent reasoning
Stable, efficient causal influence measurement to mitigate lazy behavior
Verifiable reward mechanism enabling deliberation: discard noisy outputs, consolidate instructions, and restart reasoning
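The deliberation-and-restart reward can be sketched as follows: the agent may emit a reset token to discard accumulated noisy context, and the verifiable reward combines a checkable final-answer signal with a small per-reset cost so restarts stay deliberate. The token `"<reset>"`, the helper name, and the penalty value are all assumptions for illustration, not the paper's actual design.

```python
# Hypothetical verifiable reward with a "reset" action. Emitting "<reset>"
# discards all earlier (possibly noisy) steps; only the post-reset trace
# is judged. A small penalty per reset discourages spamming restarts.
def verifiable_reward(trace, gold_answer, reset_penalty=0.05):
    """trace: list of agent output strings; the last element is the
    answer attempt. Returns 1.0 for a verified answer minus the
    reset cost, 0.0 minus the cost otherwise."""
    resets = sum(1 for step in trace if step.strip() == "<reset>")
    if resets:
        # Keep only steps after the last reset: earlier context was discarded.
        last = max(i for i, s in enumerate(trace) if s.strip() == "<reset>")
        trace = trace[last + 1:]
    correct = bool(trace) and trace[-1].strip() == gold_answer
    return (1.0 if correct else 0.0) - reset_penalty * resets

# The agent makes an arithmetic slip, resets, and recovers:
print(verifiable_reward(["7 + 35 = 41", "<reset>", "6 * 7 = 42", "42"], "42"))
# reward 0.95: full credit for the verified answer, one reset penalty
```

Because the final answer is checkable against `gold_answer`, the signal stays sparse and verifiable while still crediting an agent that abandons a noisy trajectory and restarts.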