VARP: Reinforcement Learning from Vision-Language Model Feedback with Agent Regularized Preferences

📅 2025-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address reward hacking and policy misalignment arising from poorly designed reward functions in continuous-control robotic tasks, this paper proposes a trajectory-augmented, policy-aware preference learning framework. Methodologically, the authors (1) introduce trajectory sketch overlay, a technique that encodes motion trajectories into the VLM's input images to enhance the model's temporal behavioral discrimination; and (2) incorporate policy-performance-aware reward regularization to jointly optimize the reward model and the current policy. Evaluated on Meta-World, the approach achieves 70–80% task success rates (vs. below 50% for baselines), improves preference-labeling accuracy by 15–20%, and increases episode returns in locomotion tasks by 20–30%. This work is the first to explicitly integrate trajectory-level temporal information into VLM-based feedback and to establish a co-optimization mechanism between reward learning and policy evolution, significantly improving the reliability and scalability of preference-based reinforcement learning for complex robotic control.
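The trajectory sketch overlay described above can be illustrated with a minimal sketch: paint the agent's 2D path onto the final observation frame before it is sent to the VLM for preference labeling. The image here is a plain nested list of RGB tuples standing in for a real frame, and the function name, color, and path are illustrative assumptions, not the paper's code.

```python
def overlay_trajectory(image, trajectory, color=(255, 0, 0)):
    """Paint each (row, col) waypoint of the trajectory onto a copy of the image."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # copy so the original frame is untouched
    for r, c in trajectory:
        if 0 <= r < h and 0 <= c < w:
            out[r][c] = color
    return out

# Toy 4x4 black frame and a diagonal end-effector path.
frame = [[(0, 0, 0)] * 4 for _ in range(4)]
path = [(0, 0), (1, 1), (2, 2), (3, 3)]
annotated = overlay_trajectory(frame, path)
print(annotated[2][2])  # a path pixel, now marked with the overlay color
```

In practice the overlay would be drawn with an image library onto the actual final camera frame; the point is that the single image now encodes the whole motion, not just the end state.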

📝 Abstract
Designing reward functions for continuous-control robotics often leads to subtle misalignments or reward hacking, especially in complex tasks. Preference-based RL mitigates some of these pitfalls by learning rewards from comparative feedback rather than hand-crafted signals, yet scaling human annotations remains challenging. Recent work uses Vision-Language Models (VLMs) to automate preference labeling, but a single final-state image generally fails to capture the agent's full motion. In this paper, we present a two-part solution that both improves feedback accuracy and better aligns reward learning with the agent's policy. First, we overlay trajectory sketches on final observations to reveal the path taken, allowing VLMs to provide more reliable preferences, improving preference accuracy by approximately 15-20% on Meta-World tasks. Second, we regularize reward learning by incorporating the agent's performance, ensuring that the reward model is optimized on data generated by the current policy; this addition boosts episode returns by 20-30% in locomotion tasks. Empirical studies on Meta-World demonstrate that our method achieves around 70-80% success rates across all tasks, compared to below 50% for standard approaches. These results underscore the efficacy of combining richer visual representations with agent-aware reward regularization.
Problem

Research questions and friction points this paper is trying to address.

Improves feedback accuracy in reinforcement learning using trajectory sketches.
Aligns reward learning with agent's policy through performance regularization.
Enhances success rates in complex tasks by combining visual and agent-aware methods.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Overlay trajectory sketches for improved VLM feedback
Regularize reward learning with agent performance data
Combine visual representations with agent-aware regularization
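The reward-regularization contribution listed above can be sketched as a standard Bradley-Terry preference loss plus a penalty tying the learned reward of the current policy's trajectory to its measured return, so the reward model stays calibrated to data the policy actually generates. The function names, the quadratic penalty, and the weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import math

def preference_loss(r_preferred, r_other):
    """Bradley-Terry negative log-likelihood for one labeled trajectory pair."""
    return -math.log(math.exp(r_preferred) /
                     (math.exp(r_preferred) + math.exp(r_other)))

def regularized_loss(r_preferred, r_other, r_current, perf, lam=0.1):
    """Preference loss plus a term anchoring the learned reward of the
    current policy's trajectory (r_current) to its observed return (perf)."""
    return preference_loss(r_preferred, r_other) + lam * (r_current - perf) ** 2

# Example: the preferred trajectory scores higher, and the current policy's
# learned reward (1.5) is pulled toward its measured performance (2.0).
loss = regularized_loss(r_preferred=2.0, r_other=1.0, r_current=1.5, perf=2.0, lam=0.1)
print(round(loss, 4))
```

The penalty term is what makes the reward model "agent-aware": without it, the model is fit only to preference labels and can drift away from the distribution of trajectories the evolving policy produces.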