🤖 AI Summary
This work addresses the challenge of hallucination in large vision-language models (LVLMs), where generated text often contradicts the visual input and where existing mitigation strategies frequently degrade the model's natural generation behavior. To resolve this, the authors propose MESA, a plug-and-play, non-intrusive framework that is the first to decouple hallucination suppression from perturbations to generative behavior. MESA intervenes precisely in the latent space through directional and selective modulation, suppressing hallucinatory responses while preserving the original token distribution. Requiring no model fine-tuning, MESA proves consistently effective across multiple LVLM families, significantly reducing hallucination rates on diverse generative and discriminative benchmarks while better maintaining output length and linguistic fidelity.
📝 Abstract
Large Vision-Language Models (LVLMs) have achieved remarkable success across cross-modal tasks but remain hindered by hallucinations, producing textual outputs inconsistent with visual content. Existing methods mitigate hallucinations but often alter generation behavior, resulting in shorter outputs and shifted token distributions, especially in latent space steering approaches. We identify that this issue stems from entangled steering signals, where suppressing hallucinations inadvertently disrupts the model's intrinsic generation behavior. To address this, we propose MESA, an effective plug-and-play framework that performs controlled and selective latent intervention for hallucination mitigation. Specifically, MESA targets hallucination-relevant responses while preserving the model's original token distribution, enabling effective hallucination reduction without compromising generation behavior. Extensive experiments across diverse generative and discriminative benchmarks demonstrate that MESA consistently reduces hallucinations while better preserving generation behavior, outperforming prior methods across multiple LVLM families.
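The core idea the abstract describes — intervening in latent space along a hallucination-related direction, but only on hidden states where that direction is actually active, so the rest of the token distribution is left untouched — can be sketched roughly as follows. This is an illustrative toy, not MESA's published algorithm: the function name, the projection-threshold rule, and the scaling factor are all assumptions introduced here for exposition.

```python
import numpy as np

def selective_steer(hidden, direction, alpha=0.1, tau=0.2):
    """Toy directional + selective latent intervention (NOT MESA itself).

    hidden:    (num_tokens, dim) latent states
    direction: (dim,) a presumed hallucination-related direction
    alpha:     intervention strength (assumed hyperparameter)
    tau:       selectivity threshold (assumed hyperparameter)
    """
    d = direction / np.linalg.norm(direction)   # unit steering direction
    proj = hidden @ d                           # per-token alignment score
    mask = proj > tau                           # intervene only where relevant
    steered = hidden.copy()
    # subtract part of the component along d, leaving other tokens intact
    steered[mask] -= alpha * proj[mask, None] * d
    return steered

# toy usage: 4 tokens with 8-dimensional latents
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))
v = rng.normal(size=8)
out = selective_steer(h, v)
```

In a real LVLM this kind of edit would typically be applied inside the forward pass (e.g. via a hook on a transformer layer); because tokens below the threshold are returned unchanged, the intervention is plug-and-play and leaves the model's ordinary generation path untouched.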