APPO: Attention-guided Perception Policy Optimization for Video Reasoning

📅 2026-02-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitation that current video reasoning performance is constrained primarily by fine-grained perceptual capability rather than by high-level reasoning, while perception enhancement typically relies on costly fine-grained annotations. The authors first demonstrate empirically the dominant role of perception in video reasoning, then propose Attention-guided Perception Policy Optimization (APPO). APPO introduces a token-level dense reward mechanism that optimizes intra-group perception tokens attending to critical video frames, without requiring additional annotations. The method works across model scales (3B/7B) and consistently outperforms GRPO and DAPO on multiple video benchmarks, with gains of 0.5% to 4%, demonstrating its effectiveness and generalizability.

📝 Abstract
Complex video reasoning relies more heavily on fine-grained perception than on expert-level (e.g., Ph.D.-level scientific) reasoning. Through extensive empirical observation, we identify the critical impact of perception. In particular, when perception ability is held nearly fixed, upgrading the reasoning model from Qwen3-8B to OpenAI-o3 yields only a 0.7% performance improvement. Conversely, even a modest increase in the perception model's scale (from 7B to 32B) boosts performance by 1.4%, indicating that enhancing perception, rather than reasoning, is more critical for improving performance. It is therefore worthwhile to explore how to enhance perception ability through reasoning without expensive fine-grained annotations. To this end, we propose APPO, an Attention-guided Perception Policy Optimization algorithm that leverages token-level dense rewards to improve the model's fine-grained perception. The core idea behind APPO is to optimize those tokens from different responses that primarily focus on the same crucial video frame (called intra-group perception tokens). Experimental results on diverse video benchmarks and models of different scales (3B/7B) demonstrate that APPO consistently outperforms GRPO and DAPO (by 0.5%–4%). We hope our work provides a promising, low-cost approach to enhancing a model's perception abilities through reasoning, serving diverse scenarios and demands.
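The abstract's core mechanism can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the argmax-based notion of a token "focusing" on a frame, and the fixed boost factor are all illustrative assumptions. The sketch only shows the general shape of a token-level dense reward: tokens whose cross-attention peaks on a designated crucial frame receive an upweighted share of a GRPO-style sequence advantage.

```python
import numpy as np

def appo_token_weights(attn, crucial_frame, boost=1.5):
    """Illustrative sketch (not the paper's actual algorithm).

    attn: (num_tokens, num_frames) cross-attention weights,
          one row per response token over the video frames.
    crucial_frame: index of the frame deemed crucial.
    boost: hypothetical dense-reward multiplier for perception tokens.
    """
    # Frame each token attends to most strongly.
    peak = attn.argmax(axis=1)
    weights = np.ones(attn.shape[0])
    # Tokens focused on the crucial frame get a dense-reward boost.
    weights[peak == crucial_frame] = boost
    return weights

def appo_token_advantages(seq_advantage, attn, crucial_frame):
    # Token-level advantages: a scalar GRPO-style sequence advantage
    # rescaled per token by the perception weights above.
    return seq_advantage * appo_token_weights(attn, crucial_frame)
```

In the paper, the grouping is done across responses (intra-group perception tokens sharing the same crucial frame); the sketch above collapses that to a single response for brevity.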
Problem

Research questions and friction points this paper is trying to address.

video reasoning
perception enhancement
fine-grained perception
annotation-free learning
model perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

perception enhancement
attention-guided optimization
video reasoning
token-level reward
annotation-free learning
Henghui Du
Gaoling School of Artificial Intelligence, Renmin University of China, Beijing; AI Technology Center, Online Video Business Unit, Tencent PCG
Chang Zhou
AI Technology Center, Online Video Business Unit, Tencent PCG
Xi Chen
Tencent Inc.
Natural Language Processing · Knowledge Graph · Machine Learning
Di Hu
Tenure-track Associate Professor, Renmin University of China
Multimodal Perception · Multimodal Learning · Multimodal Interaction