InEx: Hallucination Mitigation via Introspection and Cross-Modal Multi-Agent Collaboration

📅 2025-12-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) remain prone to hallucination, which undermines their reliability; existing mitigation approaches often rely on manual intervention or fail to fully exploit agent autonomy. This paper proposes InEx, a training-free, multi-agent collaborative framework for hallucination mitigation. InEx triggers introspective reasoning via entropy-driven uncertainty estimation and enables cross-modal verification through coordinated editing and self-reflection agents, achieving autonomous, iterative decision refinement. It integrates self-reflective reasoning, uncertainty quantification, and cross-modal interaction without requiring model fine-tuning. Evaluated on both general and hallucination-specific benchmarks, InEx outperforms state-of-the-art methods by 4–27%, demonstrating superior robustness and generalization.

📝 Abstract
Hallucination remains a critical challenge in large language models (LLMs), hindering the development of reliable multimodal LLMs (MLLMs). Existing solutions often rely on human intervention or underutilize the agent's ability to autonomously mitigate hallucination. To address these limitations, we draw inspiration from how humans make reliable decisions in the real world. They begin with introspective reasoning to reduce uncertainty and form an initial judgment, then rely on external verification from diverse perspectives to reach a final decision. Motivated by this cognitive paradigm, we propose InEx, a training-free, multi-agent framework designed to autonomously mitigate hallucination. InEx introduces internal introspective reasoning, guided by entropy-based uncertainty estimation, to improve the reliability of the decision agent's reasoning process. The agent first generates a response, which is then iteratively verified and refined through external cross-modal multi-agent collaboration with the editing agent and self-reflection agents, further enhancing reliability and mitigating hallucination. Extensive experiments show that InEx consistently outperforms existing methods, achieving 4%-27% gains on general and hallucination benchmarks, and demonstrating strong robustness.
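The abstract describes entropy-based uncertainty estimation as the trigger for introspective reasoning. The paper's exact formulation is not given on this page, so the sketch below is only a plausible reading: score a drafted response by the mean Shannon entropy of its per-token next-token distributions, and flag it for another introspection round when that score exceeds a threshold. The function names and the threshold value are illustrative assumptions, not the authors' implementation.

```python
import math

def token_entropy(probs):
    """Shannon entropy (in nats) of one next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def needs_introspection(step_distributions, threshold=1.0):
    """Hypothetical uncertainty gate: average per-token entropy over the
    response; above the threshold, the decision agent re-reasons before
    its answer is passed on for external verification."""
    entropies = [token_entropy(p) for p in step_distributions]
    mean_h = sum(entropies) / len(entropies)
    return mean_h > threshold, mean_h
```

A near-uniform distribution (high uncertainty, entropy ln 4 ≈ 1.39 for four options) trips the gate, while a peaked one does not; in practice the distributions would come from the decoder's logits.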
Problem

Research questions and friction points this paper is trying to address.

Mitigates hallucination in multimodal large language models (MLLMs)
Enhances reliability through introspective reasoning
Uses multi-agent collaboration for verification and refinement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free multi-agent framework for hallucination mitigation
Introspective reasoning with entropy-based uncertainty estimation
Cross-modal multi-agent collaboration for iterative verification
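The innovation bullets describe a loop in which a decision agent's response is iteratively verified and refined by an editing agent and self-reflection agents. One possible shape of that outer loop is sketched below; the agent interfaces (plain callables), the acceptance convention (`None` means a reflector accepts), and the round budget are all assumptions for illustration, not the paper's actual protocol.

```python
def inex_refine(decision_agent, editing_agent, reflection_agents,
                image, question, max_rounds=3):
    """Hypothetical InEx-style refinement loop: draft an answer, collect
    cross-modal critiques from self-reflection agents, and have the
    editing agent revise until every reflector accepts or the round
    budget is exhausted."""
    answer = decision_agent(image, question)
    for _ in range(max_rounds):
        critiques = [reflect(image, question, answer)
                     for reflect in reflection_agents]
        issues = [c for c in critiques if c is not None]
        if not issues:
            break  # all self-reflection agents accept the current answer
        answer = editing_agent(image, question, answer, issues)
    return answer
```

Because the framework is training-free, each callable could wrap a frozen MLLM behind a prompt; only the orchestration differs between agents.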
Zhongyu Yang
Xi’an Jiyun Technology Co., Ltd., Xi’an, China
Yingfang Yuan
Heriot-Watt University
Inter/Multi-disciplinary AI, Deep Learning, Graph Neural Network, Agent
Xuanming Jiang
Xi’an Jiyun Technology Co., Ltd., Xi’an, China
Baoyi An
Xi’an Jiyun Technology Co., Ltd., Xi’an, China
Wei Pang
BCML, Heriot-Watt University, Edinburgh, UK