TempR1: Improving Temporal Understanding of MLLMs via Temporal-Aware Multi-Task Reinforcement Learning

📅 2025-12-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the weak generalization and narrow task coverage of multimodal large language models (MLLMs) in long-video temporal understanding—spanning temporal localization, action detection, and temporal question answering—this paper proposes the first multi-task reinforcement learning framework tailored for diverse temporal tasks. The method introduces: (1) a fine-grained task taxonomy covering three types of temporal correspondence—point-to-point, point-to-segment, and segment-to-segment; (2) task-specific, temporal-aware reward functions; and (3) cross-task collaborative optimization via Group Relative Policy Optimization (GRPO). Evaluated on multiple long-video benchmarks, the approach significantly outperforms state-of-the-art methods, improving both single-task performance and cross-task generalization. These results empirically validate the effectiveness of jointly modeling temporal structure and task semantics.
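All three correspondence types in the taxonomy ultimately compare predicted time intervals against ground-truth instances. As a hedged illustration (the paper's exact reward formulas are not reproduced on this page), localization rewards of this kind are typically built on temporal IoU between intervals:

```python
def temporal_iou(pred, gt):
    """Temporal IoU between a predicted interval and a ground-truth
    interval, each given as (start, end) in seconds."""
    ps, pe = pred
    gs, ge = gt
    # Overlap length, clamped at zero when the intervals are disjoint
    inter = max(0.0, min(pe, ge) - max(ps, gs))
    # Union as the span from the earliest start to the latest end
    union = max(pe, ge) - min(ps, gs)
    return inter / union if union > 0 else 0.0
```

A point-to-point correspondence can be scored with a distance-based variant of the same idea, and a point-to-segment one by checking containment; the segment-to-segment case above is the general form.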

📝 Abstract
Enhancing the temporal understanding of Multimodal Large Language Models (MLLMs) is essential for advancing long-form video analysis, enabling tasks such as temporal localization, action detection, and time-sensitive question answering. While reinforcement learning (RL) has recently been explored for improving temporal reasoning, existing approaches are often confined to limited task types and data, restricting their generalization across diverse temporal understanding scenarios. To address this challenge, we present TempR1, a temporal-aware multi-task reinforcement learning framework that systematically strengthens MLLMs' temporal comprehension. We curate a multi-task corpus that exposes the model to diverse temporal structures and semantics, and build upon the Group Relative Policy Optimization (GRPO) algorithm to achieve stable and effective cross-task optimization. Specifically, we categorize temporal tasks into three correspondence types between predicted intervals and ground-truth instances, and design tailored localization rewards for each, enabling TempR1 to capture fine-grained temporal dependencies and adapt to different temporal patterns. Extensive experiments demonstrate that TempR1 attains state-of-the-art performance across multiple benchmarks. Moreover, its joint optimization over complementary tasks yields a strong synergistic effect that enhances both generalization and single-task performance, establishing a scalable and principled paradigm for temporal reasoning in MLLMs.
Problem

Research questions and friction points this paper is trying to address.

Improving temporal understanding in Multimodal Large Language Models
Addressing limited generalization across diverse temporal tasks
Enhancing temporal localization and reasoning via multi-task reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-task reinforcement learning for temporal comprehension
Tailored localization rewards for different temporal patterns
Group Relative Policy Optimization for cross-task stability
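GRPO's contribution to cross-task stability comes from normalizing rewards within each group of sampled responses rather than relying on a learned value critic. A minimal sketch of that group-relative advantage computation (standard GRPO practice; the paper's exact hyperparameters are an assumption here):

```python
import statistics

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages as used in GRPO: standardize the
    rewards of a group of responses sampled for the same prompt,
    so each task's reward scale is normalized away."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards)  # population std over the group
    return [(r - mu) / (sigma + eps) for r in rewards]
```

Because advantages are centered and scaled per group, tasks with differently scaled rewards (e.g. IoU-based localization vs. exact-match QA) can be optimized jointly without one dominating the gradient.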
Tao Wu
Nanjing University
Li Yang
ByteDance Inc.
Gen Zhan
ByteDance Inc.
Yahin Zhang
ByteDance Inc.
Yiting Liao
Staff Research Scientist at Wireless Communications Research, Intel Labs
Video Processing, Video Communications, Video Understanding
Junlin Li
ByteDance Inc., Georgia Institute of Technology, Tsinghua University
Video Compression and Processing, Video Streaming, Machine Learning, AI, ASIC Design
Deliang Fu
ByteDance Inc.
Li Zhang
ByteDance Inc.
Limin Wang
Nanjing University, Shanghai AI Lab