iMAD: Intelligent Multi-Agent Debate for Efficient and Accurate LLM Inference

πŸ“… 2025-11-14
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Multi-Agent Debate (MAD) frameworks suffer from high computational overhead and a tendency to erroneously overturn correct single-agent answers. To address this, we propose iMADβ€”an intelligent debate triggering framework that retains the efficiency of single-agent reasoning while invoking multi-agent debate only when self-critical signals indicate low answer reliability. Our key contribution is the first introduction of a 41-dimensional interpretable linguistic feature space to represent self-critical responses, coupled with a lightweight classifier and a purpose-designed FocusCal loss function, enabling robust, fine-tuning-free debate initiation decisions. Evaluated across six (vision) question-answering benchmarks, iMAD reduces average token consumption by 86%, with peak savings of 92%, while improving accuracy by up to 13.5%. This demonstrates a significant advance in jointly optimizing inference quality and computational efficiency.

πŸ“ Abstract
Large Language Model (LLM) agent systems have advanced rapidly, driven by their strong generalization in zero-shot settings. To further enhance reasoning and accuracy on complex tasks, Multi-Agent Debate (MAD) has emerged as a promising framework that engages multiple LLM agents in structured debates to encourage diverse reasoning. However, triggering MAD for every query is inefficient, as it incurs substantial computational (token) cost and may even degrade accuracy by overturning correct single-agent answers. To address these limitations, we propose intelligent Multi-Agent Debate (iMAD), a token-efficient framework that selectively triggers MAD only when it is likely to be beneficial (i.e., correcting an initially wrong answer). To achieve this goal, iMAD learns generalizable model behaviors to make accurate debate decisions. Specifically, iMAD first prompts a single agent to produce a structured self-critique response, from which we extract 41 interpretable linguistic and semantic features capturing hesitation cues. Then, iMAD uses a lightweight debate-decision classifier, trained using our proposed FocusCal loss, to determine whether to trigger MAD, enabling robust debate decisions without test dataset-specific tuning. Through extensive experiments using six (visual) question answering datasets against five competitive baselines, we have shown that iMAD significantly reduces token usage (by up to 92%) while also improving final answer accuracy (by up to 13.5%).
Problem

Research questions and friction points this paper is trying to address.

Reducing computational costs of multi-agent debates for LLM inference
Selectively triggering debates only when likely to correct wrong answers
Improving accuracy while minimizing token usage in complex tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selectively triggers multi-agent debate for efficiency
Extracts hesitation features from self-critique responses
Uses lightweight classifier with FocusCal loss for decisions
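The selective-trigger idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the real system extracts 41 interpretable linguistic and semantic features and trains its classifier with the proposed FocusCal loss, whereas here the feature extractor counts a handful of hedging cues and the classifier is a hypothetical linear model with assumed weights.

```python
import math

def extract_hesitation_features(critique: str) -> list[float]:
    """Toy stand-in for the paper's 41 interpretable features:
    count a few hedging cues in the self-critique text."""
    hedges = ["maybe", "not sure", "might", "possibly", "unsure"]
    text = critique.lower()
    return [float(text.count(h)) for h in hedges]

def debate_score(features: list[float], weights: list[float], bias: float) -> float:
    """Linear stand-in for the trained debate-decision classifier
    (the paper uses a lightweight classifier trained with FocusCal)."""
    z = sum(w * f for w, f in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid: P(single-agent answer is wrong)

def answer_query(query, single_agent, run_debate, weights, bias, threshold=0.5):
    """One cheap single-agent pass; escalate to multi-agent debate
    only when hesitation signals suggest the answer is unreliable."""
    answer, critique = single_agent(query)
    p_wrong = debate_score(extract_hesitation_features(critique), weights, bias)
    if p_wrong >= threshold:
        return run_debate(query, answer)  # costly path, taken rarely
    return answer  # keep the efficient single-agent answer
```

Because debate is invoked only above the threshold, most queries pay only the single-agent token cost, which is the source of the reported savings.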
πŸ”Ž Similar Papers
No similar papers found.
W
Wei Fan
Department of Computer Science, Virginia Tech, Blacksburg, VA, USA
JinYi Yoon
Virginia Tech
Edge AI · Networked Systems · Distributed Systems
Bo Ji
Department of Computer Science, Virginia Tech, Blacksburg, VA, USA