AI Summary
Existing multimodal large language models struggle to model the implicit causal relationships between events in video reasoning, which often leads to hallucinations. To address this limitation, this work proposes the first approach that explicitly incorporates event-level causal graph structures into the reasoning process, constructing an Event-level Video Scene Graph (EVSG) as an intermediate representation. The method further combines reinforcement-based fine-tuning with a visual attention reward mechanism to strengthen the model's grounding in video content. Evaluated on the RexTime and VidHalluc datasets, the proposed approach significantly improves event localization accuracy and the understanding of event-object relationships, effectively mitigating reasoning hallucinations.
Abstract
Video reasoning requires understanding the causal relationships between events in a video. However, such relationships are often implicit and costly to annotate manually. While existing multimodal large language models (MLLMs) often infer event relations through dense captions or video summaries, such modeling still lacks causal understanding. Without explicit modeling of causal structure within and across video events, these models suffer from hallucinations during video reasoning. In this work, we propose GraphThinker, a method based on reinforcement fine-tuning that constructs structured event-level scene graphs and enhances visual grounding to jointly reduce hallucinations in video reasoning. Specifically, we first employ an MLLM to construct an event-based video scene graph (EVSG) that explicitly models both intra- and inter-event relations, and we incorporate the constructed scene graphs into the MLLM as an intermediate thinking process. We also introduce a visual attention reward during reinforcement fine-tuning, which strengthens video grounding and further mitigates hallucinations. We evaluate GraphThinker on two datasets, RexTime and VidHalluc, where it shows a superior ability to capture object and event relations with more precise event localization, reducing hallucinations in video reasoning compared to prior methods.
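To make the intermediate representation concrete, the following is a minimal sketch of what an event-level scene graph with intra-event triplets and inter-event causal edges could look like, linearized into text for use as an intermediate reasoning step. All names here (`Triplet`, `Event`, `EVSG`, `to_prompt`, the timestamps and relations) are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an event-level video scene graph (EVSG).
# Class and field names are illustrative, not the paper's actual schema.

@dataclass
class Triplet:
    subject: str
    predicate: str
    obj: str  # intra-event relation: (subject, predicate, object)

@dataclass
class Event:
    event_id: int
    span: tuple  # (start_sec, end_sec): the event's localization in the video
    triplets: list = field(default_factory=list)

@dataclass
class EVSG:
    events: list = field(default_factory=list)
    # inter-event edges: (source event id, relation label, target event id)
    inter_event: list = field(default_factory=list)

    def to_prompt(self) -> str:
        """Linearize the graph as text so it can be injected into an
        MLLM's context as an intermediate thinking step."""
        lines = []
        for e in self.events:
            rels = "; ".join(f"{t.subject} {t.predicate} {t.obj}" for t in e.triplets)
            lines.append(f"Event {e.event_id} [{e.span[0]}s-{e.span[1]}s]: {rels}")
        for src, rel, dst in self.inter_event:
            lines.append(f"Event {src} --{rel}--> Event {dst}")
        return "\n".join(lines)

# Toy example: one event causally linked to the next.
graph = EVSG(
    events=[
        Event(1, (0.0, 4.5), [Triplet("person", "opens", "door")]),
        Event(2, (4.5, 9.0), [Triplet("dog", "runs through", "door")]),
    ],
    inter_event=[(1, "causes", 2)],
)
print(graph.to_prompt())
```

The point of such a structure is that both kinds of relations the abstract mentions (intra-event object relations and inter-event causal links) become explicit and textual, so the model reasons over them rather than inferring them implicitly from captions.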