AI Summary
This work addresses the challenge of visual anchor drift in multimodal large language models (MLLMs) for video question answering, where critical frames are often overlooked, leading to erroneous reasoning and hallucinations. To mitigate this issue, the authors propose FrameRepeat, a framework that employs a lightweight frame-scoring module to automatically identify key frames requiring reinforcement. Coupled with an Add-One-In (AOI) self-supervised training strategy, FrameRepeat leverages the model's own output probabilities to generate supervision signals that guide dynamic frame repetition. Notably, this approach achieves a general-purpose frame repetition mechanism without modifying the underlying model architecture, effectively alleviating visual forgetting and strengthening reliance on the original visual cues at minimal computational cost. Extensive experiments demonstrate that FrameRepeat consistently improves accuracy and suppresses hallucinations across multiple MLLMs and benchmark datasets, confirming its efficacy and strong generalization capability.
Abstract
Recently, Multimodal Large Language Models (MLLMs) have demonstrated significant potential in complex visual tasks through the integration of Chain-of-Thought (CoT) reasoning. However, in Video Question Answering, extended thinking processes do not consistently yield performance gains and may even cause degradation due to "visual anchor drifting", where models increasingly rely on self-generated text, sidelining visual inputs and producing hallucinations. Existing mitigations typically introduce dedicated mechanisms for re-attending to visual inputs during inference, but these approaches often incur prohibitive training costs and generalize poorly across architectures. To address this, we propose FrameRepeat, an automated enhancement framework that features a lightweight repeat-scoring module, enabling Video-LLMs to autonomously identify which frames should be reinforced. We further introduce a novel training strategy, Add-One-In (AOI), which uses MLLM output probabilities to generate supervision signals representing repeat gain; these signals train the frame-scoring network that guides frame repetition behavior. Experimental results across multiple models and datasets demonstrate that FrameRepeat is both effective and generalizable in strengthening important visual cues during the reasoning process.
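To make the AOI idea concrete, the following is a minimal, hypothetical sketch: for each frame, duplicating ("adding one in") that frame changes the model's probability of the correct answer, and that change serves as a repeat-gain supervision target for a lightweight frame scorer. The toy `answer_prob` model, the linear scorer, and all names here are illustrative assumptions, not the paper's actual interfaces.

```python
# Hypothetical sketch of Add-One-In (AOI) supervision for frame scoring.
# A toy "model" stands in for the MLLM; in the paper, answer probabilities
# would come from the Video-LLM itself.
import numpy as np

rng = np.random.default_rng(0)

def answer_prob(frame_feats, repeat_idx=None):
    """Toy stand-in for the MLLM's probability of the correct answer.

    Frames with larger mean feature values are treated as more informative;
    repeating a frame doubles its contribution to the answer score.
    """
    weights = np.ones(len(frame_feats))
    if repeat_idx is not None:
        weights[repeat_idx] += 1.0  # "add one in": duplicate this frame
    score = float(np.dot(weights, frame_feats.mean(axis=1)))
    return 1.0 / (1.0 + np.exp(-score))  # squash to (0, 1)

def aoi_repeat_gains(frame_feats):
    """Supervision signal: gain in answer probability from repeating each frame."""
    base = answer_prob(frame_feats)
    return np.array(
        [answer_prob(frame_feats, repeat_idx=i) - base
         for i in range(len(frame_feats))]
    )

def fit_frame_scorer(frame_feats, gains, lr=0.1, steps=500):
    """Regress a tiny linear scorer onto the AOI repeat gains."""
    w = np.zeros(frame_feats.shape[1])
    for _ in range(steps):
        pred = frame_feats @ w
        grad = frame_feats.T @ (pred - gains) / len(gains)
        w -= lr * grad
    return w

# Toy example: 8 frames with 4-dim features; frame 3 is made salient.
feats = rng.normal(0.0, 0.1, size=(8, 4))
feats[3] += 0.8
gains = aoi_repeat_gains(feats)
w = fit_frame_scorer(feats, gains)
scores = feats @ w  # the trained scorer should rank the salient frame highest
print(int(np.argmax(scores)))
```

At inference time, such a scorer would select high-scoring frames for repetition without querying the model twice per frame, which is the cost the AOI training stage amortizes.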