🤖 AI Summary
Large vision-language models (VLMs) frequently generate hallucinated descriptions inconsistent with visual content, undermining cross-modal alignment in video-language tasks. To address this, we propose HACA, a self-training framework that explicitly models hallucination correction as an alignment learning signal rather than merely suppressing hallucinations. HACA introduces a video-text inconsistency detection and reconstruction module to identify and rectify descriptive deviations, enabling self-driven consistency optimization without additional annotations. The method integrates contrastive learning with cross-modal attention to strengthen fine-grained spatiotemporal alignment. Evaluated on MSR-VTT and YouCook2 benchmarks, HACA achieves significant improvements in video-caption binding and text-to-video retrieval, yielding an average +3.2% gain in R@1. These results validate the effectiveness and generalizability of leveraging hallucination correction as a principled driver for cross-modal alignment.
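The summary describes combining contrastive learning over video-text pairs with cross-modal attention. As a rough illustration, the following is a minimal PyTorch sketch of a symmetric contrastive (InfoNCE-style) alignment loss of the kind described, where each video is paired with its hallucination-corrected caption. The embedding dimension, temperature, and the assumption that row i of each batch forms a positive pair are illustrative choices, not HACA's actual implementation.

```python
# Minimal sketch of a symmetric contrastive video-text alignment loss.
# Assumptions (not from the paper): 512-d embeddings, temperature 0.07,
# and batches where video i's positive is corrected caption i.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(video_emb: torch.Tensor,
                               text_emb: torch.Tensor,
                               temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE over a batch of (video, corrected-caption) pairs.

    video_emb, text_emb: (batch, dim) tensors; row i of each is a positive pair.
    """
    # Normalize so dot products are cosine similarities.
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature                     # (batch, batch)
    targets = torch.arange(v.size(0), device=v.device)  # positives on the diagonal
    # Average the video-to-text and text-to-video directions.
    loss_v2t = F.cross_entropy(logits, targets)
    loss_t2v = F.cross_entropy(logits.T, targets)
    return 0.5 * (loss_v2t + loss_t2v)

# Usage with random embeddings standing in for encoder outputs:
video = torch.randn(8, 512)
text = torch.randn(8, 512)
print(contrastive_alignment_loss(video, text).item())
```

The symmetric form penalizes misalignment in both retrieval directions, which matches the paper's evaluation on text-to-video retrieval as well as video-caption binding.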
📝 Abstract
Large Vision-Language Models often generate hallucinated content that is not grounded in their visual inputs. While prior work focuses on mitigating hallucinations, we instead explore leveraging hallucination correction as a training objective to improve video-language alignment. We introduce HACA, a self-training framework that learns to correct hallucinations in descriptions that do not align with the video content. By identifying and correcting inconsistencies, HACA enhances the model's ability to align video and textual representations for spatio-temporal reasoning. Our experimental results show consistent gains in video-caption binding and text-to-video retrieval tasks, demonstrating that tasks inspired by hallucination correction serve as an effective strategy for improving vision-language alignment.
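For reference, the retrieval gains above are reported in Recall@1 (R@1): the fraction of text queries whose ground-truth video is the top-ranked result. Below is a minimal sketch of how R@k is typically computed, assuming the standard benchmark convention that query i's ground-truth video sits at index i of the similarity matrix; this is a generic metric illustration, not code from the paper.

```python
# Generic Recall@k for text-to-video retrieval (standard convention:
# text query i's ground-truth video is at column index i).
import torch

def recall_at_k(sim: torch.Tensor, k: int = 1) -> float:
    """sim: (num_texts, num_videos) similarity matrix."""
    topk = sim.topk(k, dim=1).indices                 # top-k video indices per query
    targets = torch.arange(sim.size(0)).unsqueeze(1)  # (num_texts, 1) ground truth
    hits = (topk == targets).any(dim=1).float()       # 1 if ground truth in top-k
    return hits.mean().item()

# Usage with a random similarity matrix standing in for model scores:
sim = torch.randn(100, 100)
print(f"R@1 = {recall_at_k(sim, k=1):.3f}")
```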