CounterVid: Counterfactual Video Generation for Mitigating Action and Temporal Hallucinations in Video-Language Models

πŸ“… 2026-01-08
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 1
✨ Influential: 0
πŸ€– AI Summary
This work addresses the tendency of video-language models to generate hallucinations in action recognition and temporal reasoning due to overreliance on linguistic priors. To mitigate this, the authors propose CounterVid, the first large-scale counterfactual video synthesis framework specifically designed for action- and temporality-related hallucinations. The framework leverages multimodal large language models to guide action editing and combines image and video diffusion models to generate semantically hard negative samples that preserve scene consistency while altering actions or temporal order. This process yields the CounterVid dataset, comprising 26,000 preference pairs. Furthermore, the authors introduce MixDPO, a unified preference optimization strategy that jointly utilizes textual and visual signals to fine-tune Qwen2.5-VL, achieving significant performance gains on temporal ordering tasks and demonstrating strong generalization on standard video hallucination benchmarks.

πŸ“ Abstract
Video-language models (VLMs) achieve strong multimodal understanding but remain prone to hallucinations, especially when reasoning about actions and temporal order. Existing mitigation strategies, such as textual filtering or random video perturbations, often fail to address the root cause: over-reliance on language priors rather than fine-grained visual dynamics. We propose a scalable framework for counterfactual video generation that synthesizes videos differing only in actions or temporal structure while preserving scene context. Our pipeline combines multimodal LLMs for action proposal and editing guidance with diffusion-based image and video models to generate semantic hard negatives at scale. Using this framework, we build CounterVid, a synthetic dataset of ~26k preference pairs targeting action recognition and temporal reasoning. We further introduce MixDPO, a unified Direct Preference Optimization approach that jointly leverages textual and visual preferences. Fine-tuning Qwen2.5-VL with MixDPO yields consistent improvements, notably in temporal ordering, and transfers effectively to standard video hallucination benchmarks. Code and models will be made publicly available.
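The MixDPO objective builds on Direct Preference Optimization, which scores a chosen response above a rejected one relative to a frozen reference model. Below is a minimal sketch of how textual and visual preference pairs might be combined into one loss; the function names and the mixing weight `lam` are illustrative assumptions, not details from the paper:

```python
import math

def dpo_loss(beta, logp_w, logp_l, ref_logp_w, ref_logp_l):
    """Standard DPO loss for a single preference pair.

    logp_w / logp_l: policy log-probs of the preferred / rejected response.
    ref_logp_w / ref_logp_l: same quantities under the frozen reference model.
    In practice this is averaged over a batch.
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log sigmoid(margin): small when the policy prefers the chosen response.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def mixdpo_loss(text_pair, video_pair, beta=0.1, lam=0.5):
    """Hypothetical unified objective mixing a textual preference pair
    (same video, preferred vs. hallucinated caption) with a visual one
    (same caption, original vs. counterfactual video). `lam` is an
    assumed mixing weight."""
    return lam * dpo_loss(beta, *text_pair) + (1.0 - lam) * dpo_loss(beta, *video_pair)
```

When the policy and reference log-probabilities coincide, the margin is zero and each term reduces to log 2, so training only moves the loss below that baseline by widening the preferred-over-rejected gap.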
Problem

Research questions and friction points this paper is trying to address.

- video-language models
- action hallucinations
- temporal hallucinations
- multimodal understanding
- visual dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

- counterfactual video generation
- video-language models
- temporal hallucination
- diffusion models
- Direct Preference Optimization