Generalizable Dense Reward for Long-Horizon Robotic Tasks

πŸ“… 2026-03-30
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the limitations of existing robotic foundation policies, which suffer from distribution shift and error accumulation in long-horizon tasks, and whose reinforcement learning fine-tuning relies on handcrafted rewards with limited generalization. The authors propose VLLR, a framework that leverages large language models (LLMs) and vision-language models (VLMs) to decompose tasks and assess progress, constructing an extrinsic reward without human intervention. Combined with an intrinsic reward derived from the policy's self-certainty, VLLR establishes a universal dense reward mechanism to guide PPO fine-tuning. Through value-function pre-warming and the synergistic use of the dual rewards, the method significantly improves both sample efficiency and task success rates. On the CHORES benchmark, VLLR improves absolute success rate by up to 56% over the pretrained policy and outperforms the best existing RL fine-tuning approaches by up to 5% on in-distribution tasks and up to 10% on out-of-distribution tasks.
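The summary above describes combining a VLM-derived extrinsic progress reward with an intrinsic reward based on the policy's self-certainty. A minimal sketch of one plausible formulation, assuming a discrete action distribution and defining self-certainty as the KL divergence from the uniform distribution (zero when the policy is maximally uncertain, larger when it is confident); the paper's exact formula and the `beta` weighting are assumptions, not taken from the source:

```python
import numpy as np

def self_certainty(logits):
    """Intrinsic reward sketch: KL(pi || uniform) over the action distribution.

    Returns 0 for a uniform (maximally uncertain) policy and grows as the
    distribution becomes more peaked. Illustrative only; the paper may use
    a different definition of self-certainty.
    """
    logits = np.asarray(logits, dtype=float)
    z = logits - logits.max()                  # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    n = probs.size
    nz = probs > 0                             # treat 0 * log 0 as 0
    # KL(p || U) = sum_i p_i * log(p_i * n)
    return float(np.sum(probs[nz] * np.log(probs[nz] * n)))

def dense_reward(extrinsic_progress, logits, beta=0.1):
    """Combine VLM-estimated subtask progress with the intrinsic term.

    `extrinsic_progress` stands in for the VLM's per-step progress signal;
    `beta` is a hypothetical mixing coefficient.
    """
    return extrinsic_progress + beta * self_certainty(logits)
```

A uniform policy contributes no intrinsic bonus, so early in training the extrinsic progress signal dominates; as the policy becomes more decisive, the intrinsic term adds per-step shaping between subtask completions.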
πŸ“ Abstract
Existing robotic foundation policies are trained primarily via large-scale imitation learning. While such models demonstrate strong capabilities, they often struggle with long-horizon tasks due to distribution shift and error accumulation. Reinforcement learning (RL) can finetune these models, but it does not work well across diverse tasks without manual reward engineering. We propose VLLR, a dense reward framework combining (1) an extrinsic reward from Large Language Models (LLMs) and Vision-Language Models (VLMs) for task progress recognition, and (2) an intrinsic reward based on policy self-certainty. VLLR uses LLMs to decompose tasks into verifiable subtasks, then uses VLMs to estimate progress, which initializes the value function during a brief warm-up phase, avoiding prohibitive inference cost during full training; self-certainty provides per-step intrinsic guidance throughout PPO finetuning. Ablation studies reveal complementary benefits: VLM-based value initialization primarily improves task completion efficiency, while self-certainty primarily enhances success rates, particularly on out-of-distribution tasks. On the CHORES benchmark covering mobile manipulation and navigation, VLLR achieves up to 56% absolute success rate gains over the pretrained policy, up to 5% gains over state-of-the-art RL finetuning methods on in-distribution tasks, and up to 10% gains on out-of-distribution tasks, all without manual reward engineering. Additional visualizations can be found at https://silongyong.github.io/vllr_project_page/
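The abstract describes pre-warming the value function on VLM progress estimates so that expensive VLM calls are confined to a brief warm-up phase rather than the full PPO run. A minimal sketch of that idea, assuming a linear value head regressed onto progress labels in [0, 1]; the feature representation, optimizer, and data are all hypothetical stand-ins, not details from the paper:

```python
import numpy as np

def warm_up_value(features, vlm_progress, lr=0.1, epochs=200):
    """Pre-warm a linear value head by regressing onto VLM progress labels.

    `features`: (N, d) state features collected during the warm-up rollouts.
    `vlm_progress`: (N,) progress estimates in [0, 1] that a VLM would return
    for each state (hypothetical data). Returns the learned weights, which
    would then initialize the PPO critic so VLM queries are only needed
    during this short phase.
    """
    features = np.asarray(features, dtype=float)
    targets = np.asarray(vlm_progress, dtype=float)
    w = np.zeros(features.shape[1])
    for _ in range(epochs):
        pred = features @ w
        grad = features.T @ (pred - targets) / len(targets)  # MSE gradient
        w -= lr * grad
    return w
```

After warm-up, the critic starts with a rough notion of task progress, so early PPO updates have a usable baseline instead of learning values from scratch under sparse feedback.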
Problem

Research questions and friction points this paper is trying to address.

long-horizon tasks
distribution shift
reward engineering
robotic foundation policies
generalizable reward
Innovation

Methods, ideas, or system contributions that make the work stand out.

dense reward
large language models
vision-language models
reinforcement learning
self-certainty