MedMMV: A Controllable Multimodal Multi-Agent Framework for Reliable and Verifiable Clinical Reasoning

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current multimodal large language models (MLLMs) exhibit unstable early-stage evidence interpretation in medical reasoning, which leads to hallucinations and divergent inference branches that undermine the consistency of the final conclusion. To address this, the authors propose MedMMV, a controllable multimodal multi-agent clinical reasoning framework. It models heterogeneous medical evidence in a structured evidence graph, constrains inference paths through diversified short rollouts supervised by a hallucination detector, and introduces a Combined Uncertainty scoring mechanism to ensure auditability. The framework integrates multi-agent collaboration, evidence-graph construction, hallucination detection, and uncertainty quantification end to end. On six medical benchmarks it improves accuracy by up to 12.7%. Blind expert evaluations by clinicians show substantially higher perceived reasoning fidelity while preserving informational completeness.
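The summary's "structured evidence graph" can be pictured as a small typed graph in which each intermediate reasoning step must cite a concrete evidence source before it is kept. The paper does not publish its implementation, so the class, relation names, and grounding rule below are hypothetical, a minimal sketch of the idea rather than the authors' code:

```python
from collections import defaultdict

class EvidenceGraph:
    """Hypothetical minimal evidence graph: findings are nodes, and typed
    edges link each finding to the modality or source that supports it."""

    def __init__(self) -> None:
        # finding -> list of (relation, supporting source)
        self.edges: dict[str, list[tuple[str, str]]] = defaultdict(list)

    def add(self, finding: str, relation: str, support: str) -> None:
        """Record that `finding` is backed by `support` via `relation`."""
        self.edges[finding].append((relation, support))

    def is_grounded(self, finding: str) -> bool:
        """A reasoning step survives only if it cites at least one source."""
        return len(self.edges[finding]) > 0

# Illustrative multimodal evidence for one diagnostic case.
g = EvidenceGraph()
g.add("right lower lobe consolidation", "observed_in", "chest X-ray")
g.add("elevated white blood cell count", "reported_in", "lab panel")
```

Under this reading, a hallucination detector would flag any inference step whose finding fails the `is_grounded` check.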

📝 Abstract
Recent progress in multimodal large language models (MLLMs) has demonstrated promising performance on medical benchmarks and in preliminary trials as clinical assistants. Yet, our pilot audit of diagnostic cases uncovers a critical failure mode: instability in early evidence interpretation precedes hallucination, creating branching reasoning trajectories that cascade into globally inconsistent conclusions. This highlights the need for clinical reasoning agents that constrain stochasticity and hallucination while producing auditable decision flows. We introduce MedMMV, a controllable multimodal multi-agent framework for reliable and verifiable clinical reasoning. MedMMV stabilizes reasoning through diversified short rollouts, grounds intermediate steps in a structured evidence graph under the supervision of a Hallucination Detector, and aggregates candidate paths with a Combined Uncertainty scorer. On six medical benchmarks, MedMMV improves accuracy by up to 12.7% and, more critically, demonstrates superior reliability. Blind physician evaluations confirm that MedMMV substantially increases reasoning truthfulness without sacrificing informational content. By controlling instability through a verifiable, multi-agent process, our framework provides a robust path toward deploying trustworthy AI systems in high-stakes domains like clinical decision support.
Problem

Research questions and friction points this paper is trying to address.

Addresses instability in clinical reasoning from multimodal data
Reduces hallucination in medical diagnostic evidence interpretation
Ensures verifiable decision flows for trustworthy clinical AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses diversified short rollouts to stabilize reasoning
Grounds steps in evidence graph with hallucination detection
Aggregates paths using combined uncertainty scoring
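The three innovations above can be read as a generate-then-select loop: run several short rollouts, score each with both the model's own confidence and the hallucination detector, and keep the path with the lowest combined uncertainty. The weighting scheme and field names below are assumptions for illustration, not the paper's actual scorer:

```python
from dataclasses import dataclass

@dataclass
class CandidatePath:
    """One short reasoning rollout with its per-path signals."""
    answer: str
    model_confidence: float      # e.g. mean token probability, in [0, 1]
    hallucination_score: float   # detector output in [0, 1]; higher is worse

def combined_uncertainty(path: CandidatePath, alpha: float = 0.5) -> float:
    """Blend model uncertainty with the detector signal (illustrative weights)."""
    return alpha * (1.0 - path.model_confidence) + (1.0 - alpha) * path.hallucination_score

def select_path(paths: list[CandidatePath], alpha: float = 0.5) -> CandidatePath:
    """Keep the candidate rollout with the lowest combined uncertainty."""
    return min(paths, key=lambda p: combined_uncertainty(p, alpha))

# A confident answer with a high hallucination score loses to a slightly
# less confident one that the detector considers well grounded.
rollouts = [
    CandidatePath("diagnosis A", model_confidence=0.82, hallucination_score=0.40),
    CandidatePath("diagnosis B", model_confidence=0.74, hallucination_score=0.05),
]
best = select_path(rollouts)  # selects "diagnosis B"
```

The point of the blend is that neither signal alone suffices: raw confidence rewards fluent hallucinations, while the detector alone ignores how sure the model is.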