Enhancing Vision-Language Compositional Understanding with Multimodal Synthetic Data

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-language models (VLMs) exhibit limited compositional reasoning, largely because training data lack high-quality multimodal pairs that capture fine-grained semantic variations. Method: The authors propose SVD-GT, a framework that leverages text-to-image generative models to synthesize high-fidelity variant samples. It injects image features into the text-conditioned generation process to improve fine-grained cross-modal alignment, introduces an adaptive margin contrastive loss that dynamically selects hard negatives and suppresses mismatched samples, and jointly optimizes cross-modal alignment and compositional generalization. Contribution/Results: On four compositional reasoning benchmarks, SVD-GT improves CLIP's average accuracy by over 8% and outperforms prior state-of-the-art methods by 2% on three benchmarks. The work integrates controllable synthesis, feature-guided generation, and hard-negative-aware contrastive learning to enhance VLM compositionality, establishing a data-driven paradigm for modeling semantic structure.

📝 Abstract
Despite impressive advancements in various multimodal tasks, vision-language models (VLMs) still struggle with compositional understanding due to limited exposure to training samples that contain subtle variations within paired examples. With advances in multimodal generative models, a natural solution is to generate synthetic samples with subtle variations for training VLMs. However, generating and training on synthetic samples with subtle variations presents two challenges: difficulty in accurately creating precise variations and inconsistency in cross-modal alignment quality. To address these challenges, we propose SVD-GT (Subtle Variation Data Generation and Training), which integrates image feature injection into a text-to-image generative model to enhance the quality of synthetic variations and employs an adaptive margin loss to differentiate samples using adaptive margins, which help filter out potentially incorrect synthetic samples and focus the learning on informative hard samples. Evaluations on four compositional understanding benchmarks demonstrate that SVD-GT significantly improves the compositionality of VLMs, boosting the average accuracy of CLIP by over 8% across all benchmarks and outperforming state-of-the-art methods by 2% on three benchmarks.
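The adaptive margin loss described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's exact formulation: the `base_margin` and `scale` hyperparameters and the sigmoid-based margin adaptation are assumptions about how a per-sample margin might down-weight likely-mismatched synthetic negatives while keeping informative hard negatives.

```python
import torch
import torch.nn.functional as F

def adaptive_margin_loss(img_emb, pos_txt_emb, neg_txt_emb,
                         base_margin=0.2, scale=0.1):
    """Hinge-style contrastive loss with a per-sample adaptive margin.

    Hypothetical sketch: the margin grows for hard negatives (similarity
    close to the positive's) and shrinks for easy or likely-mismatched
    synthetic negatives, so those contribute little to the loss.
    """
    # Work with unit-norm embeddings so dot products are cosine similarities.
    img = F.normalize(img_emb, dim=-1)
    pos = F.normalize(pos_txt_emb, dim=-1)
    neg = F.normalize(neg_txt_emb, dim=-1)

    s_pos = (img * pos).sum(-1)  # similarity with the matching caption
    s_neg = (img * neg).sum(-1)  # similarity with the synthetic variant

    # Adaptive margin: sigmoid(s_neg - s_pos) is near 0.5 for hard
    # negatives and near 0 for easy/noisy ones (assumed weighting).
    margin = base_margin + scale * torch.sigmoid(s_neg - s_pos.detach())

    # Standard hinge: penalize only when the negative comes within
    # `margin` of the positive similarity.
    return F.relu(margin + s_neg - s_pos).mean()
```

A well-separated pair (negative far from the image, positive aligned) incurs zero loss, so training effort concentrates on the subtle-variation negatives the paper targets.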
Problem

Research questions and friction points this paper is trying to address.

Enhance vision-language models' compositional understanding.
Generate synthetic data with subtle variations for training.
Improve cross-modal alignment quality in synthetic samples.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates image feature injection into text-to-image generation of synthetic data.
Uses an adaptive margin loss to differentiate samples and filter noisy negatives.
Enhances VLM compositionality via the SVD-GT generation-and-training framework.