Ground What You See: Hallucination-Resistant MLLMs via Caption Feedback, Diversity-Aware Sampling, and Conflict Regularization

📅 2026-01-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the susceptibility of multimodal large language models (MLLMs) to hallucination during reinforcement learning (RL) training, identifying three root causes: over-reliance on chain-of-thought visual reasoning, insufficient exploration diversity, and destructive interference among training samples. To mitigate these issues, the study proposes a three-stage joint optimization framework. First, quality-reward-guided visual caption feedback anchors the model's initial visual description accurately. Second, a diversity-aware sampling strategy based on reward-distribution variance enhances exploration. Third, sample interference is alleviated by grouping sample pairs by Neural Tangent Kernel (NTK) similarity and applying an InfoNCE loss. Experiments show that the approach significantly reduces hallucination rates while improving reasoning accuracy.
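The diversity-aware sampling stage described above can be sketched in a few lines: each training sample's rewards across several RL rollouts are summarized by mean and variance, and high-variance samples (those the policy has not yet resolved) are prioritized. This is a minimal illustration under stated assumptions, not the paper's implementation; the function name `prioritize_by_reward_variance` and the `top_k` cutoff are invented for the sketch.

```python
import statistics

def prioritize_by_reward_variance(reward_groups, top_k=2):
    """Rank samples by the variance of their per-rollout reward distributions.

    reward_groups: dict mapping sample id -> list of rewards collected across
    multiple RL rollouts. High-variance samples are treated as the most
    informative for exploration and are returned first.
    """
    # Summarize each sample's reward distribution by (mean, variance).
    stats = {
        sid: (statistics.mean(rs), statistics.pvariance(rs))
        for sid, rs in reward_groups.items()
    }
    # Sort sample ids by descending variance and keep the top_k.
    ranked = sorted(stats, key=lambda sid: stats[sid][1], reverse=True)
    return ranked[:top_k], stats

rollout_rewards = {
    "sample_a": [0.9, 0.9, 0.9, 0.9],  # confidently solved: low variance
    "sample_b": [0.0, 1.0, 0.0, 1.0],  # undecided: high variance
    "sample_c": [0.1, 0.0, 0.1, 0.0],  # confidently failed: low variance
}
selected, stats = prioritize_by_reward_variance(rollout_rewards, top_k=1)
# selected → ["sample_b"], the only sample with large reward variance
```

In this toy run only `sample_b` survives the cutoff, matching the abstract's intuition that samples with consistent rewards (always right or always wrong) offer little exploration signal.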

📝 Abstract
While Multimodal Large Language Models (MLLMs) have achieved remarkable success across diverse tasks, their practical deployment is severely hindered by hallucination issues, which become particularly acute during Reinforcement Learning (RL) optimization. This paper systematically analyzes the root causes of hallucinations in MLLMs under RL training, identifying three critical factors: (1) an over-reliance on chained visual reasoning, where inaccurate initial descriptions or redundant information anchor subsequent inferences to incorrect premises; (2) insufficient exploration diversity during policy optimization, leading the model to generate overly confident but erroneous outputs; and (3) destructive conflicts between training samples, where Neural Tangent Kernel (NTK) similarity causes false associations and unstable parameter updates. To address these challenges, we propose a comprehensive framework comprising three core modules. First, we enhance visual localization by introducing dedicated planning and captioning stages before the reasoning phase, employing a quality-based caption reward to ensure accurate initial anchoring. Second, to improve exploration, we categorize samples based on the mean and variance of their reward distributions, prioritizing samples with high variance to focus the model on diverse and informative data. Finally, to mitigate sample interference, we regulate NTK similarity by grouping sample pairs and applying an InfoNCE loss to push overly similar pairs apart and pull dissimilar ones closer, thereby guiding gradient interactions toward a balanced range. Experimental results demonstrate that our proposed method significantly reduces hallucination rates and effectively enhances the inference accuracy of MLLMs.
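The third module of the abstract, regulating NTK similarity with an InfoNCE loss, can be illustrated with a small self-contained sketch. Since an empirical NTK entry K(x_i, x_j) is the inner product of per-sample gradients, cosine similarity between gradient vectors serves here as a stand-in for normalized NTK similarity; how the paper actually assigns "overly similar" and "dissimilar" pairs to positives and negatives is not specified in this listing, so the pairing below is an assumption made for illustration.

```python
import math

def cosine_sim(u, v):
    """Cosine similarity; a proxy for normalized empirical NTK similarity
    when u and v are per-sample gradient vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE: -log( exp(s(a,p)/t) / (exp(s(a,p)/t) + sum_k exp(s(a,n_k)/t)) ).

    Minimizing it pulls the anchor toward its positive and pushes it away
    from the negatives, which is the direction of the paper's regularizer.
    """
    pos = math.exp(cosine_sim(anchor, positive) / temperature)
    denom = pos + sum(
        math.exp(cosine_sim(anchor, n) / temperature) for n in negatives
    )
    return -math.log(pos / denom)

# Toy 2-D "gradient" vectors (hypothetical, for illustration only).
anchor = [1.0, 0.0]
near = [1.0, 0.1]   # nearly aligned with the anchor
far = [0.0, 1.0]    # orthogonal to the anchor

loss_easy = info_nce_loss(anchor, near, [far])   # positive already close: small loss
loss_hard = info_nce_loss(anchor, far, [near])   # positive far, negative close: large loss
```

The comparison `loss_easy < loss_hard` shows the gradient signal the loss provides: pairs whose similarity disagrees with their assigned grouping incur a larger penalty, nudging pairwise gradient interactions toward the balanced range the abstract describes.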
Problem

Research questions and friction points this paper is trying to address.

hallucination
Multimodal Large Language Models
Reinforcement Learning
Neural Tangent Kernel
exploration diversity
Innovation

Methods, ideas, or system contributions that make the work stand out.

hallucination resistance
diversity-aware sampling
conflict regularization
caption feedback
Neural Tangent Kernel