🤖 AI Summary
To address imprecise visual tool selection (e.g., OCR, object detection, spatial reasoning) and the underutilization of inter-agent disagreement in multimodal reasoning, this paper proposes a controversy-driven dynamic tool invocation mechanism. It detects reasoning disagreements among agents to identify which expert visual tools are needed, and introduces a tool-aligned consensus scoring function that guides high-quality, multi-round debate and supports more robust answer aggregation. The method integrates a multi-agent debate framework, fine-grained visual tool invocation, and consensus-guided decision-making. Evaluated on the A-OKVQA and MMMU benchmarks, it outperforms the strongest baseline by 3.4% and 2.4%, respectively, and achieves a 1.3% improvement on the M3D medical dataset. These results demonstrate that disagreement-aware tool scheduling enhances both reasoning accuracy and debate quality.
📝 Abstract
Specialized visual tools can augment large language models or vision-language models with expert knowledge (e.g., grounding, spatial reasoning, medical knowledge), but knowing which tools to call, and when to call them, can be challenging. We introduce DART, a multi-agent framework that uses disagreements between multiple debating visual agents to identify useful visual tools (e.g., object detection, OCR, spatial reasoning) that can resolve inter-agent disagreement. These tools enable fruitful multi-agent discussion by introducing new information and by providing tool-aligned agreement scores that highlight agents whose answers agree with expert tools. We then use an aggregator agent to select the best answer, providing it with the agent outputs and tool information. We test DART on four diverse benchmarks and show that our approach improves over multi-agent debate as well as over single-agent tool-calling frameworks, beating the next-strongest baseline (multi-agent debate with a judge model) by 3.4% and 2.4% on A-OKVQA and MMMU, respectively. We also find that DART adapts well to new tools in applied domains, with a 1.3% improvement on the M3D medical dataset over other strong tool-calling, single-agent, and multi-agent baselines. Additionally, we measure text overlap across rounds to highlight the richer discussion in DART compared to existing multi-agent methods. Finally, we study the tool call distribution, finding that diverse tools are reliably used to help resolve disagreement.
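The core loop described above — detect inter-agent disagreement, invoke expert tools only when agents conflict, then score answers by their agreement with tool evidence — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the disagreement metric (fraction of agents off the majority answer), the `tool_alignment` substring check, and the `call_tools` interface are all hypothetical stand-ins for DART's actual components.

```python
from collections import Counter

def disagreement(answers):
    # Hypothetical disagreement metric: fraction of agents that do not
    # share the modal answer (the paper's detection method may differ).
    top = Counter(answers).most_common(1)[0][1]
    return 1.0 - top / len(answers)

def tool_alignment(answer, tool_outputs):
    # Toy tool-aligned agreement score: fraction of tool outputs that
    # mention the candidate answer (illustrative only).
    return sum(answer.lower() in t.lower() for t in tool_outputs) / max(len(tool_outputs), 1)

def dart_round(answers, call_tools, threshold=0.3):
    # One debate round: if agents largely agree, return the majority answer;
    # otherwise invoke expert tools (e.g., OCR, object detection) and let a
    # simple aggregator pick the answer best aligned with tool evidence.
    if disagreement(answers) <= threshold:
        return Counter(answers).most_common(1)[0][0]
    tool_outputs = call_tools()  # stand-in for DART's tool invocation
    scores = {a: tool_alignment(a, tool_outputs) for a in set(answers)}
    return max(scores, key=scores.get)
```

For example, with agent answers `["cat", "cat", "dog"]` the disagreement (1/3) exceeds the threshold, so tools are called and the tool-aligned answer wins; with unanimous answers no tool call is made. In the real system, multiple such rounds would feed into the aggregator agent along with the full tool information.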