🤖 AI Summary
Multi-Agent Debate (MAD) frameworks suffer from high computational overhead and a tendency to erroneously overturn correct single-agent answers. To address this, we propose iMAD, an intelligent debate-triggering framework that retains the efficiency of single-agent reasoning while invoking multi-agent debate only when self-critical signals indicate low answer reliability. Our key contribution is the first introduction of a 41-dimensional interpretable linguistic feature space to represent self-critical responses, coupled with a lightweight classifier and a purpose-designed FocusCal loss function, enabling robust debate-initiation decisions without fine-tuning. Evaluated across six (visual) question-answering benchmarks, iMAD reduces average token consumption by 86%, with peak savings of 92%, while improving accuracy by up to 13.5%. This demonstrates a significant advance in jointly optimizing inference quality and computational efficiency.
📝 Abstract
Large Language Model (LLM) agent systems have advanced rapidly, driven by their strong generalization in zero-shot settings. To further enhance reasoning and accuracy on complex tasks, Multi-Agent Debate (MAD) has emerged as a promising framework that engages multiple LLM agents in structured debates to encourage diverse reasoning. However, triggering MAD for every query is inefficient: it incurs substantial computational (token) cost and may even degrade accuracy by overturning correct single-agent answers. To address these limitations, we propose intelligent Multi-Agent Debate (iMAD), a token-efficient framework that selectively triggers MAD only when it is likely to be beneficial (i.e., correcting an initially wrong answer). To achieve this goal, iMAD learns generalizable model behaviors to make accurate debate decisions. Specifically, iMAD first prompts a single agent to produce a structured self-critique response, from which we extract 41 interpretable linguistic and semantic features capturing hesitation cues. Then, iMAD uses a lightweight debate-decision classifier, trained with our proposed FocusCal loss, to determine whether to trigger MAD, enabling robust debate decisions without test-dataset-specific tuning. Through extensive experiments on six (visual) question answering datasets against five competitive baselines, we show that iMAD significantly reduces token usage (by up to 92%) while also improving final answer accuracy (by up to 13.5%).
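To make the trigger pipeline concrete, here is a minimal, hypothetical sketch of the decision flow: extract hesitation-cue features from a self-critique, score them with a lightweight linear classifier, and invoke debate only above an unreliability threshold. The feature definitions, the hedge-word list, and the focal-loss-style objective shown here are illustrative stand-ins only; the paper's actual 41 features and its FocusCal loss are not reproduced.

```python
import math

# Toy hedge-word lexicon (an assumption, not the paper's feature set).
HEDGES = {"maybe", "perhaps", "unsure", "might", "possibly", "uncertain"}

def extract_features(critique: str) -> list[float]:
    """Toy stand-ins for interpretable hesitation-cue features."""
    words = critique.lower().split()
    n = max(len(words), 1)
    return [
        sum(w.strip(".,") in HEDGES for w in words) / n,  # hedge-word ratio
        float(critique.count("?")),                       # explicit questions
        float("however" in critique.lower()),             # contradiction cue
    ]

def focal_style_loss(probs, labels, gamma=2.0):
    """Focal-loss-style objective (an assumed stand-in for FocusCal):
    down-weights confidently classified examples so training focuses
    on the borderline cases where the debate decision matters."""
    eps = 1e-7
    total = 0.0
    for p, y in zip(probs, labels):
        p = min(max(p, eps), 1 - eps)
        pt = p if y == 1 else 1 - p
        total += -((1 - pt) ** gamma) * math.log(pt)
    return total / len(probs)

def should_debate(critique, weights, bias=0.0, threshold=0.5):
    """Trigger multi-agent debate only when the classifier predicts
    the single-agent answer is likely unreliable."""
    z = sum(w * f for w, f in zip(weights, extract_features(critique))) + bias
    p_wrong = 1.0 / (1.0 + math.exp(-z))
    return p_wrong >= threshold
```

In this sketch, a hesitant self-critique ("I am unsure, maybe the answer is B?") crosses the threshold and triggers debate, while a confident one does not; the real classifier would be trained offline on labeled self-critiques rather than using hand-set weights.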