🤖 AI Summary
To address the weak generalization and narrow task coverage of multimodal large language models (MLLMs) in long-video temporal understanding—spanning temporal localization, action detection, and temporal question answering—this paper proposes the first multi-task reinforcement learning framework tailored for diverse temporal tasks. The method introduces: (1) a fine-grained task taxonomy covering three types of temporal correspondence—point-to-point, point-to-segment, and segment-to-segment; (2) task-specific, temporal-aware reward functions; and (3) cross-task collaborative optimization via Group Relative Policy Optimization (GRPO). Evaluated on multiple long-video benchmarks, the approach significantly outperforms state-of-the-art methods, improving both single-task performance and cross-task generalization. These results empirically validate the effectiveness of jointly modeling temporal structure and task semantics.
📝 Abstract
Enhancing the temporal understanding of Multimodal Large Language Models (MLLMs) is essential for advancing long-form video analysis, enabling tasks such as temporal localization, action detection, and time-sensitive question answering. While reinforcement learning (RL) has recently been explored for improving temporal reasoning, existing approaches are often confined to limited task types and data, restricting their generalization across diverse temporal understanding scenarios. To address this challenge, we present TempR1, a temporal-aware multi-task reinforcement learning framework that systematically strengthens MLLMs' temporal comprehension. We curate a multi-task corpus that exposes the model to diverse temporal structures and semantics, and build upon the Group Relative Policy Optimization (GRPO) algorithm to achieve stable and effective cross-task optimization. Specifically, we categorize temporal tasks into three correspondence types between predicted intervals and ground-truth instances, and design tailored localization rewards for each, enabling TempR1 to capture fine-grained temporal dependencies and adapt to different temporal patterns. Extensive experiments demonstrate that TempR1 attains state-of-the-art performance across multiple benchmarks. Moreover, its joint optimization over complementary tasks yields a strong synergistic effect, enhancing both generalization and single-task performance and establishing a scalable, principled paradigm for temporal reasoning in MLLMs.
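To make the reward design concrete, the segment-to-segment correspondence case can be sketched as a temporal IoU reward combined with GRPO's group-relative advantage normalization. This is a minimal illustrative sketch, assuming a standard temporal-IoU reward and the usual GRPO mean/std normalization over a rollout group; the function names and details are hypothetical, not TempR1's actual implementation.

```python
# Hypothetical sketch: a segment-to-segment localization reward (temporal IoU)
# and a GRPO-style group-relative advantage. Illustrative only; the paper's
# task-specific rewards and training details are not reproduced here.

def temporal_iou(pred, gt):
    """IoU between a predicted interval and a ground-truth interval,
    each given as (start, end) in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: normalize each sampled response's
    reward by the mean and std of its rollout group."""
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Example: four rollouts predict intervals for one ground-truth segment.
gt = (10.0, 20.0)
preds = [(9.0, 19.0), (10.0, 20.0), (15.0, 30.0), (0.0, 5.0)]
rewards = [temporal_iou(p, gt) for p in preds]   # e.g. ~0.82, 1.0, 0.25, 0.0
advantages = grpo_advantages(rewards)            # sums to ~0 within the group
```

Point-to-point and point-to-segment correspondences would swap in distance- or containment-based rewards, but the group-relative normalization step stays the same.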