🤖 AI Summary
This work addresses action-scene hallucination in Video-LLMs—spurious associations between actions and scenes caused by entangled spatiotemporal feature representations. To mitigate this, we propose a dual-component method: (1) DST-Attention, which explicitly decouples spatial and temporal token interactions, and (2) Harmonic-RoPE, a unified positional encoding scheme that assigns balanced positional IDs to text, spatial, and temporal tokens. To rigorously evaluate hallucination, we introduce UNSCENE, the first benchmark dedicated to action-scene hallucination detection, comprising 1,320 videos and 4,078 question-answer pairs. Experiments demonstrate that our approach achieves state-of-the-art performance on UNSCENE, significantly outperforming existing methods. Moreover, it substantially improves robustness and accuracy across mainstream video understanding tasks—including video QA, captioning, and reasoning—validating that explicit spatiotemporal disentanglement effectively alleviates vision-language hallucination.
📝 Abstract
In this work, we tackle action-scene hallucination in Video Large Language Models (Video-LLMs), where models incorrectly predict actions based on the scene context or scenes based on observed actions. We observe that existing Video-LLMs often suffer from action-scene hallucination due to two main factors. First, existing Video-LLMs intermingle spatial and temporal features by applying an attention operation across all tokens. Second, they use the standard Rotary Position Embedding (RoPE), which causes the text tokens to overemphasize certain types of tokens depending on their sequential order. To address these issues, we introduce MASH-VLM, Mitigating Action-Scene Hallucination in Video-LLMs through disentangled spatial-temporal representations. Our approach includes two key innovations: (1) DST-attention, a novel attention mechanism that disentangles the spatial and temporal tokens within the LLM by using masked attention to restrict direct interactions between the spatial and temporal tokens; (2) Harmonic-RoPE, which extends the dimensionality of the positional IDs, allowing the spatial and temporal tokens to maintain balanced positions relative to the text tokens. To evaluate action-scene hallucination in Video-LLMs, we introduce the UNSCENE benchmark with 1,320 videos and 4,078 QA pairs. Extensive experiments demonstrate that MASH-VLM achieves state-of-the-art results on the UNSCENE benchmark, as well as on existing video understanding benchmarks.
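The masked-attention idea behind DST-attention can be sketched as follows. This is a minimal illustration only: the token layout (`[spatial | temporal | text]`) and the exact masking rules are assumptions for exposition, not the paper's specification.

```python
def dst_attention_mask(n_spatial: int, n_temporal: int, n_text: int) -> list[list[bool]]:
    """Build a boolean attention mask (True = attention allowed).

    Hypothetical token order: [spatial | temporal | text].
    Spatial and temporal tokens are barred from attending to each other
    directly, so the two streams stay disentangled; text tokens attend
    to everything and merge information from both streams.
    """
    n = n_spatial + n_temporal + n_text
    mask = [[True] * n for _ in range(n)]
    for i in range(n_spatial):
        for j in range(n_spatial, n_spatial + n_temporal):
            mask[i][j] = False  # spatial query -> temporal key blocked
            mask[j][i] = False  # temporal query -> spatial key blocked
    return mask

# Example: 4 spatial, 3 temporal, 2 text tokens.
mask = dst_attention_mask(4, 3, 2)
assert not mask[0][4] and not mask[4][0]  # spatial <-> temporal blocked
assert all(mask[7])                       # text tokens attend everywhere
```

In a transformer layer, such a mask would be applied before the softmax by setting disallowed logits to negative infinity, so spatial and temporal representations interact only indirectly through the text tokens.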