Synth-Align: Improving Trustworthiness in Vision-Language Model with Synthetic Preference Data Alignment

📅 2024-12-23
📈 Citations: 4
Influential: 0
🤖 AI Summary
To address pervasive hallucination in large vision-language models (LVLMs) during cross-modal generation, this paper proposes a post-training alignment framework leveraging synthetically generated human-preference data. The method introduces two key innovations: (1) the first controllable synthetic preference data generation mechanism designed specifically for LVLMs; and (2) the first use of a learnable reward model, replacing manual annotations or fixed metrics (e.g., CLIP), as a human-preference proxy, enabling teacher-free multimodal Direct Preference Optimization (DPO). Evaluated on LLaVA-1.5-7B, the approach achieves 87.6% accuracy and 97.8% precision on POPE, improves the MMHal-Bench score from 2.36 to 3.49, and cuts the hallucination rate from 51.0% to 25.0% (a 50.98% relative reduction).

📝 Abstract
Large Vision-Language Models (LVLMs) have shown promising capabilities in understanding and generating information by integrating visual and textual data. However, current models remain prone to hallucinations, which degrade performance and greatly harm the user experience in real-world applications. Post-training alignment, particularly preference tuning, is intended to align model outputs and behaviors (safety, instruction following, style), ensuring robustness and adaptability across a wide range of tasks. The use of synthetic data for alignment, particularly in multimodal settings, remains underexplored. Existing approaches typically use a strong model or a ground-truth model (CLIP) to determine positive and negative image-text data points. This paper proposes SynthAlign, a pipeline to generate and collect synthetic human-preference image-text data with fine-grained control, built specifically for post-training alignment with DPO. At the core of the framework is the use of reward models as a proxy for human preference. A series of evaluations and benchmarks validates the effectiveness of the proposed framework and the resulting dataset. Notably, our framework-enhanced LLaVA-1.5-7B achieved substantial POPE improvements (87.6% accuracy and 97.8% precision), the MMHal-Bench score increased from 2.36 to 3.49, and the hallucination rate decreased from 51.0% to 25.0% (a 50.98% relative reduction).
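The reward-model-as-preference-proxy idea from the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the `reward_model` function here is a hypothetical stand-in (a real one would score image-text alignment with a learned model), and the field names in the output dict are assumptions chosen to match the usual DPO data format.

```python
# Sketch: building a DPO preference pair by scoring candidate responses
# with a reward model instead of human labels or CLIP similarity.

def reward_model(image_id: str, response: str) -> float:
    # Hypothetical placeholder so the sketch runs end to end; a real
    # reward model would score how faithfully `response` describes the image.
    return -float(len(response))  # toy proxy: penalize longer responses

def build_preference_pair(image_id: str, candidates: list[str]) -> dict:
    """Rank candidate responses by reward score; the best becomes 'chosen'
    and the worst becomes 'rejected', forming one DPO training example."""
    scored = sorted(candidates,
                    key=lambda r: reward_model(image_id, r),
                    reverse=True)
    return {"image": image_id, "chosen": scored[0], "rejected": scored[-1]}

pair = build_preference_pair(
    "img_001",
    ["A dog on grass.", "A dog and two cats on grass near a red car."],
)
```

The key design point is that the ranking signal is a learned, replaceable scoring function, so preference data can be generated at scale without a stronger teacher model or manual annotation.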
Problem

Research questions and friction points this paper is trying to address.

LVLMs suffer from hallucinations degrading real-world performance
Synthetic preference data for multimodal alignment remains underexplored
Existing methods rely on strong models for preference determination
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates synthetic human-preference image-text data
Uses reward models as proxy for human preference
Applies DPO for post-training alignment of models
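The DPO step in the list above optimizes the standard per-pair objective from the original DPO formulation; the sketch below assumes that standard form (with inverse-temperature `beta`) rather than any paper-specific variant, and the numeric log-probabilities in the usage line are made up for illustration.

```python
import math

def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """DPO loss for one (chosen, rejected) pair.

    Inputs are log-probabilities of each response under the policy being
    trained and under the frozen reference model. The loss is
    -log(sigmoid(beta * (policy log-ratio margin over the reference))).
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# If the policy prefers the chosen response more strongly than the
# reference does, the margin is positive and the loss drops below log(2).
loss = dpo_loss(-5.0, -9.0, -6.0, -8.0)
```

In practice these log-probabilities come from summing token log-probs of each full response, and the loss is averaged over a batch of reward-model-ranked pairs.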