🤖 AI Summary
Current evaluation of medical multi-agent systems overemphasizes final-answer accuracy while neglecting the explainability and auditability of collaborative reasoning processes, thereby undermining clinical trustworthiness.
Method: We propose the first systematic taxonomy of collaborative failure modes (flawed consensus, suppression of correct minority opinions, inefficient deliberation, and information loss during synthesis) and conduct a large-scale empirical study (N = 3,600) across six medical datasets and six state-of-the-art multi-agent architectures, using a mixed-methods framework that integrates qualitative analysis with quantitative auditing.
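To make the taxonomy concrete, here is a minimal sketch of how the four failure modes and a per-case audit record could be encoded; all names (`FailureMode`, `CaseAudit`, the field layout) are illustrative assumptions, not the paper's actual instrumentation.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class FailureMode(Enum):
    """Hypothetical labels for the four collaborative failure modes."""
    FLAWED_CONSENSUS = auto()        # agents converge on a wrong answer via shared deficiencies
    MINORITY_SUPPRESSION = auto()    # a correct minority opinion is overruled
    INEFFECTIVE_DISCUSSION = auto()  # deliberation rounds add no diagnostic signal
    INFORMATION_LOSS = auto()        # key evidence is dropped during final synthesis

@dataclass
class CaseAudit:
    """One audited consultation: the final outcome plus any process-level failures."""
    case_id: str
    dataset: str         # e.g. one of the six medical datasets
    framework: str       # e.g. one of the six multi-agent architectures
    final_correct: bool  # outcome-centric signal: did the final answer match the reference?
    failures: list[FailureMode] = field(default_factory=list)  # process-aware signal

    @property
    def process_sound(self) -> bool:
        # Sound only if no failure mode was flagged; note that a case can be
        # final_correct and still not process_sound.
        return not self.failures
```

The design point this sketch captures is that outcome and process are tracked as independent signals, so a case can yield the right answer through a flagged, unsound collaboration.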
Contribution/Results: We demonstrate that high answer accuracy does not guarantee sound diagnostic reasoning: optimizing outputs alone does not ensure logical validity. Our findings establish transparent, auditable collaboration as a prerequisite for clinically trustworthy medical AI. This work provides both theoretical grounding and practical guidance for a paradigm shift in multi-agent system evaluation, from outcome-centric to process-aware assessment.
📝 Abstract
While large language model (LLM)-based multi-agent systems show promise in simulating medical consultations, their evaluation is often confined to final-answer accuracy. This practice treats their internal collaborative processes as opaque "black boxes" and overlooks a critical question: is a diagnostic conclusion reached through a sound and verifiable reasoning pathway? The inscrutable nature of these systems poses a significant risk in high-stakes medical applications, potentially leading to flawed or untrustworthy conclusions. To address this, we conduct a large-scale empirical study of 3,600 cases from six medical datasets and six representative multi-agent frameworks. Through a rigorous, mixed-methods approach combining qualitative analysis with quantitative auditing, we develop a comprehensive taxonomy of collaborative failure modes. Our quantitative audit reveals four dominant failure patterns: flawed consensus driven by shared model deficiencies, suppression of correct minority opinions, ineffective discussion dynamics, and critical information loss during synthesis. This study demonstrates that high accuracy alone is an insufficient basis for clinical or public trust. It highlights the urgent need for transparent and auditable reasoning processes, a cornerstone for the responsible development and deployment of medical AI.
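Continuing the hypothetical `CaseAudit` sketch above, a quantitative audit over the 3,600 cases could aggregate these records per framework or dataset; the function name and aggregation choices below are assumptions for illustration, not the study's protocol.

```python
from collections import Counter

def audit_summary(cases: list["CaseAudit"]) -> dict:
    """Tally failure-mode frequencies and, crucially, how often a correct
    final answer coexists with a flawed collaborative process."""
    mode_counts = Counter(mode for case in cases for mode in case.failures)
    correct = [case for case in cases if case.final_correct]
    return {
        "n_cases": len(cases),
        "accuracy": len(correct) / len(cases) if cases else 0.0,
        "failure_mode_counts": {mode.name: n for mode, n in mode_counts.items()},
        # The audit's central quantity: right answers reached via unsound reasoning.
        "correct_but_flawed": sum(1 for case in correct if not case.process_sound),
    }
```

Grouping such summaries by `framework` or `dataset` would surface exactly the gap the paper argues for: architectures whose accuracy looks strong while their `correct_but_flawed` count reveals unsound collaboration.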