🤖 AI Summary
This work addresses the susceptibility of large vision-language models to object hallucination under weak structural supervision, a phenomenon often driven by the visual encoder's bias toward local textures. The study is the first to reveal the link between this structural deficiency bias and hallucination, and introduces Structure-Disrupted Contrastive Decoding (SDCD): a training-free decoding mechanism that calibrates the output distribution by incorporating structure-disrupted contrastive views. Without modifying the model architecture, SDCD effectively suppresses texture-driven hallucinations while also enhancing multimodal understanding and reasoning. Experimental results demonstrate significant reductions in object hallucination rates across multiple benchmarks, underscoring the method's efficacy in aligning model outputs with structurally coherent visual semantics.
📄 Abstract
Large Vision-Language Models (LVLMs) have made significant progress in multimodal understanding and reasoning, yet object hallucination remains a critical challenge. While existing research focuses on mitigating language priors or high-level statistical biases, it often overlooks the internal complexities of the visual encoding process. We identify that visual statistical bias, arising from the inherent Bag-of-Patches behavior of vision encoders under weak structural supervision, is a contributing factor to object hallucination. Under this bias, models prioritize local texture features within individual patches over holistic geometric structures. This tendency can induce spurious visual confidence and result in hallucinations. To address this, we introduce a training-free algorithm called Structure-Disrupted Contrastive Decoding (SDCD), which performs contrastive calibration of the output distribution by introducing a shuffled, structure-disrupted view. By penalizing tokens that maintain high confidence under this structure-less view, SDCD effectively suppresses the texture-driven bias. Experimental results demonstrate that SDCD significantly mitigates hallucinations across multiple benchmarks and enhances the overall multimodal capabilities of LVLMs.
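The core mechanism described above can be sketched in a few lines: build a structure-disrupted view by shuffling image patches (destroying global geometry while keeping local textures), then contrastively combine the model's logits under the original and shuffled views so that tokens which stay confident without structure are penalized. This is a minimal illustration, not the paper's implementation; the patch size, the `alpha` weight, and the `(1 + alpha) * orig - alpha * shuffled` form are assumptions borrowed from standard contrastive-decoding formulations.

```python
import numpy as np

def shuffle_patches(image, patch=16, seed=None):
    """Structure-disrupted view: split an HxWxC image into non-overlapping
    patches and randomly permute their positions. Local textures inside each
    patch are preserved; the global geometric arrangement is destroyed.
    (Illustrative helper; patch size is an assumption, not from the paper.)"""
    rng = np.random.default_rng(seed)
    h, w, c = image.shape
    gh, gw = h // patch, w // patch
    # Cut into a (gh*gw, patch, patch, c) stack of patches.
    patches = (image[:gh * patch, :gw * patch]
               .reshape(gh, patch, gw, patch, c)
               .transpose(0, 2, 1, 3, 4)
               .reshape(gh * gw, patch, patch, c))
    # Permute patch positions, then reassemble the image grid.
    patches = patches[rng.permutation(gh * gw)]
    return (patches.reshape(gh, gw, patch, patch, c)
            .transpose(0, 2, 1, 3, 4)
            .reshape(gh * patch, gw * patch, c))

def sdcd_logits(logits_orig, logits_shuffled, alpha=1.0):
    """Contrastive calibration of next-token logits. Tokens whose confidence
    survives the structure-less view (texture-driven) are pushed down;
    tokens grounded in global structure are relatively boosted.
    The exact combination rule here is an assumed, standard form."""
    return (1 + alpha) * logits_orig - alpha * logits_shuffled
```

At decode time one would run the LVLM twice per step, once on the original image and once on the shuffled view, and sample from `sdcd_logits` instead of the raw logits; no weights are changed, which is what makes the approach training-free.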