Recurrent Reasoning with Vision-Language Models for Estimating Long-Horizon Embodied Task Progress

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language models struggle to balance complex temporal reasoning with computational efficiency in long-horizon embodied task progress estimation. To address this challenge, this work proposes R²VLM, a novel architecture featuring recursive reasoning and a dynamically evolving chain-of-thought mechanism. By iteratively processing local video segments while maintaining a coherent global context, R²VLM significantly reduces computational overhead on long videos without compromising strong reasoning capabilities. Built upon vision-language foundations, the model is trained on automatically generated data from ALFRED and Ego4D benchmarks. It achieves new state-of-the-art performance in task progress estimation and demonstrates superior effectiveness in downstream applications—including policy learning, reward modeling, and proactive assistance—highlighting its practical utility and robustness.

📝 Abstract
Accurately estimating task progress is critical for embodied agents to plan and execute long-horizon, multi-step tasks. Despite promising advances, existing Vision-Language Model (VLM)-based methods primarily leverage their video understanding capabilities while neglecting their complex reasoning potential. Furthermore, processing long video trajectories with VLMs is computationally prohibitive for real-world deployment. To address these challenges, we propose the Recurrent Reasoning Vision-Language Model ($\text{R}^2$VLM). Our model features a recurrent reasoning framework that processes local video snippets iteratively, maintaining a global context through an evolving Chain of Thought (CoT). This CoT explicitly records task decomposition, key steps, and their completion status, enabling the model to reason about complex temporal dependencies. This design avoids the high cost of processing long videos while preserving essential reasoning capabilities. We train $\text{R}^2$VLM on large-scale, automatically generated datasets from ALFRED and Ego4D. Extensive experiments on progress estimation and downstream applications, including progress-enhanced policy learning, reward modeling for reinforcement learning, and proactive assistance, demonstrate that $\text{R}^2$VLM achieves strong performance and generalization, setting a new state of the art in long-horizon task progress estimation. The models and benchmarks are publicly available at \href{https://huggingface.co/collections/zhangyuelin/r2vlm}{huggingface}.
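The recurrent structure the abstract describes can be sketched in a few lines: snippets are folded into an evolving CoT state one at a time, so the per-step cost depends on snippet length rather than total video length. This is a minimal illustration only; the VLM call is stubbed out, and all names (`CoTState`, `stub_vlm_step`, `estimate_progress`) are hypothetical, not the paper's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class CoTState:
    """Evolving chain-of-thought: task decomposition plus completion status."""
    steps: list = field(default_factory=list)    # decomposed key steps
    completed: set = field(default_factory=set)  # indices of finished steps

def stub_vlm_step(snippet, state: CoTState) -> CoTState:
    """Placeholder for one VLM reasoning pass over a short snippet.
    A real model would update the CoT from visual evidence; here we
    simply mark one pending step complete per snippet for illustration."""
    for i in range(len(state.steps)):
        if i not in state.completed:
            state.completed.add(i)
            break
    return state

def estimate_progress(snippets, task_steps):
    """Recurrently fold snippets into the CoT; return progress in [0, 1]."""
    state = CoTState(steps=list(task_steps))
    for snippet in snippets:  # cost scales with snippet count, not video length
        state = stub_vlm_step(snippet, state)
    return len(state.completed) / max(len(state.steps), 1)

progress = estimate_progress(
    snippets=["clip1", "clip2"],
    task_steps=["pick up mug", "go to sink", "wash mug"],
)
print(progress)  # 2 of 3 steps observed complete -> 2/3
```

The key design point mirrored here is that only the compact CoT state, not the full frame history, is carried between iterations.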
Problem

Research questions and friction points this paper is trying to address.

task progress estimation
vision-language models
long-horizon tasks
embodied agents
computational efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Recurrent Reasoning
Vision-Language Model
Chain of Thought
Long-Horizon Task Progress
Embodied AI
Yuelin Zhang
Gaoling School of Artificial Intelligence, Renmin University of China
MLLM; Geometric GNN
Sijie Cheng
RayNeo.AI; Department of Computer Science and Technology, Tsinghua University; Institute for AI Industry Research (AIR), Tsinghua University
Chen Li
Gaoling School of Artificial Intelligence, Renmin University of China; Beijing Key Laboratory of Research on Large Models and Intelligent Governance; Engineering Research Center of Next-Generation Intelligent Search and Recommendation, MOE
Zongzhao Li
Gaoling School of Artificial Intelligence, Renmin University of China; Beijing Key Laboratory of Research on Large Models and Intelligent Governance; Engineering Research Center of Next-Generation Intelligent Search and Recommendation, MOE
Yuxin Huang
Unknown affiliation
Yang Liu
Tsinghua University
Wenbing Huang
Associate Professor, Renmin University of China
Machine Learning; AI for Science