AI Summary
Existing multimodal large language models (MLLMs) overly emphasize linguistic feedback during Direct Preference Optimization (DPO)-based alignment, neglecting visual context and thereby suffering from severe visual hallucinations. This work proposes AdaViP, the first adaptive preference optimization framework explicitly designed for vision-perception enhancement. First, it constructs vision-driven preference pairs by leveraging multiple vision foundation models to localize and mask salient image regions, generating fine-grained visual contrastive samples. Second, it introduces a dynamic dual-objective weighting mechanism that adaptively balances visual fidelity and linguistic consistency within the DPO loss. Evaluated on Object HalBench, AdaViP-7B reduces response-level and mention-level visual hallucinations by 93.7% and 96.4%, respectively, significantly outperforming state-of-the-art methods and achieving, for the first time, end-to-end multimodal preference alignment guided by explicit visual perception.
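The vision-driven preference pair idea can be sketched in a few lines: the original image is the "chosen" visual input, and a copy with a salient region masked out is the "rejected" one, so a vision-sensitive model should prefer responses grounded in the unmasked image. The sketch below is a minimal illustration on a toy 2D grid; in the actual pipeline the bounding box would come from vision foundation models (detection/segmentation), which are assumed here and not implemented.

```python
def mask_salient_region(image, box):
    """Zero out a salient region (x0, y0, x1, y1) of a 2D image grid to
    build the vision-based 'rejected' sample; the unmodified image is
    the 'chosen' one. In practice `box` would be produced by vision
    foundation models localizing a key visual element (assumed here)."""
    x0, y0, x1, y1 = box
    return [[0 if (y0 <= r < y1 and x0 <= c < x1) else v
             for c, v in enumerate(row)]
            for r, row in enumerate(image)]

# Toy 3x3 "image"; the masked copy removes the bottom-right 2x2 patch.
img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
rejected = mask_salient_region(img, (1, 1, 3, 3))
# rejected == [[1, 2, 3], [4, 0, 0], [7, 0, 0]]
```

A real implementation would operate on pixel tensors and could inpaint or blur the region instead of zeroing it; zeroing is just the simplest way to delete the visual evidence the preferred response depends on.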
Abstract
Preference alignment through Direct Preference Optimization (DPO) has demonstrated significant effectiveness in aligning multimodal large language models (MLLMs) with human preferences. However, existing methods focus primarily on language preferences while neglecting the critical visual context. In this paper, we propose Adaptive Vision-enhanced Preference optimization (AdaViP), which addresses these limitations through two key innovations: (1) vision-based preference pair construction, which integrates multiple visual foundation models to strategically remove key visual elements from the image, enhancing MLLMs' sensitivity to visual details; and (2) adaptive preference optimization, which dynamically balances vision- and language-based preferences for more accurate alignment. Extensive evaluations across different benchmarks demonstrate the effectiveness of our approach. Notably, our AdaViP-7B achieves 93.7% and 96.4% reductions in response-level and mention-level hallucination respectively on Object HalBench, significantly outperforming current state-of-the-art methods.