🤖 AI Summary
This work addresses the spatiotemporal reasoning challenges posed by dynamic 4D environments in egocentric videos—such as interaction counting, relative localization, trajectory tracking, and static object localization—by proposing EgoReasoner, a two-stage framework. EgoReasoner is the first to integrate task-specific cognitive structures into the reasoning process: task-adaptive thought templates guide structured chains of thought, and a task-aware reward function drives GRPO-based reinforcement fine-tuning, together yielding precise entity grounding, temporal alignment, and logical consistency. On the HD-EPIC benchmark, a 3B-parameter model trained on only 16K samples attains 37.5% average accuracy, outperforming Qwen2.5-VL-7B (25.7%) by over 10 percentage points.
📝 Abstract
Egocentric video understanding is inherently complex due to the dynamic 4D nature of the environment, where camera motion and object displacements necessitate continuous re-evaluation of spatial relations. In this work, we target a suite of under-explored egocentric 4D reasoning tasks: fixture interaction counting, viewpoint-relative fixture location, object movement itinerary tracking, and stationary object localization. These tasks require fundamentally different cognitive operations: spatial anchoring, temporal tracking, and duration reasoning. We observe that these structural differences make task-agnostic approaches insufficient: generic Chain-of-Thought methods lack task-appropriate reasoning primitives, and uniform reinforcement learning actively destabilizes performance on spatial tasks. To address this, we propose EgoReasoner, a two-stage framework that aligns both the reasoning scaffold and the reward signal to each task's cognitive structure. In the first stage, Task-Adaptive Thinking Templates guide the synthesis of structured CoT traces that teach the model, via supervised fine-tuning, to reason adaptively across task types. In the second stage, task-aware reward functions verify entity grounding, temporal alignment, and task-adaptive logical consistency, selectively strengthening each reasoning pathway via reinforcement fine-tuning with GRPO. Our 3B-parameter model, trained on only 16K samples, achieves 37.5% average accuracy on the challenging HD-EPIC benchmark, surpassing Qwen2.5-VL-7B (25.7%) by over 10 points.
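To make the second stage concrete, here is a minimal sketch of how a task-aware reward combined with GRPO's group-relative advantage normalization could look. This is not the paper's implementation: the per-task weights, the binary verification signals, and the task names are hypothetical placeholders standing in for the abstract's three verification criteria (entity grounding, temporal alignment, logical consistency).

```python
import numpy as np

def task_aware_reward(trace, task_type):
    """Score one chain-of-thought trace for a given task type.

    Hypothetical per-task weights: e.g. spatial-localization tasks weight
    entity grounding more heavily, temporal tasks weight alignment.
    """
    weights = {  # (grounding, temporal, logic) -- illustrative values only
        "interaction_counting":    (0.3, 0.5, 0.2),
        "relative_location":       (0.5, 0.2, 0.3),
        "itinerary_tracking":      (0.3, 0.4, 0.3),
        "stationary_localization": (0.5, 0.2, 0.3),
    }[task_type]
    signals = (
        float(trace["entities_grounded"]),   # cited objects appear in the video
        float(trace["timestamps_aligned"]),  # cited times match the clip
        float(trace["logic_consistent"]),    # reasoning steps entail the answer
    )
    return sum(w * s for w, s in zip(weights, signals))

def grpo_advantages(rewards):
    """GRPO-style advantage: normalize rewards within a group of rollouts."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

# Example: four sampled rollouts for one interaction-counting question.
rollouts = [
    {"entities_grounded": 1, "timestamps_aligned": 1, "logic_consistent": 1},
    {"entities_grounded": 1, "timestamps_aligned": 0, "logic_consistent": 1},
    {"entities_grounded": 0, "timestamps_aligned": 0, "logic_consistent": 0},
    {"entities_grounded": 1, "timestamps_aligned": 1, "logic_consistent": 0},
]
rewards = [task_aware_reward(t, "interaction_counting") for t in rollouts]
advantages = grpo_advantages(rewards)
```

The key property GRPO exploits is that advantages are computed relative to other rollouts of the same question, so a fully verified trace is reinforced only insofar as it beats its group's mean, with no learned value network required.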