Reducing Oracle Feedback with Vision-Language Embeddings for Preference-Based RL

📅 2026-03-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high cost and limited scalability of preference-based reinforcement learning, which typically relies on extensive human feedback. The authors propose ROVED, a framework that combines vision-language embeddings with selective human annotation. ROVED uses a vision-language model to generate segment-level preferences and an uncertainty-aware mechanism to request human labels only for highly uncertain samples. Concurrently, it applies parameter-efficient fine-tuning to continually refine the embeddings with the oracle labels it collects. This sharply reduces the annotation burden, cutting human queries by up to 80% and total labeling costs by up to 90% across multiple robotic manipulation tasks, while matching or exceeding the performance of state-of-the-art methods. ROVED thus offers a scalable, accurate, and cross-task generalizable approach to preference learning in robotics.
📝 Abstract
Preference-based reinforcement learning can learn effective reward functions from comparisons, but its scalability is constrained by the high cost of oracle feedback. Lightweight vision-language embedding (VLE) models provide a cheaper alternative, but their noisy outputs limit their effectiveness as standalone reward generators. To address this challenge, we propose ROVED, a hybrid framework that combines VLE-based supervision with targeted oracle feedback. Our method uses the VLE to generate segment-level preferences and defers to an oracle only for samples with high uncertainty, identified through a filtering mechanism. In addition, we introduce a parameter-efficient fine-tuning method that adapts the VLE with the obtained oracle feedback, so that the model improves over time. This retains the scalability of embeddings and the accuracy of oracle supervision while avoiding the inefficiencies of relying on either alone. Across multiple robotic manipulation tasks, ROVED matches or surpasses prior preference-based methods while reducing oracle queries by up to 80%. Remarkably, the adapted VLE generalizes across tasks, yielding cumulative annotation savings of up to 90%, highlighting the practicality of combining scalable embeddings with precise oracle supervision for preference-based RL.
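The abstract's core mechanism is uncertainty-gated labeling: score each segment pair with the VLE, and defer to the oracle only when the VLE's scores are too close to call. A minimal sketch of that decision rule, assuming a cosine-similarity scoring of segment embeddings against a task-description embedding (the function name `vle_preference` and the margin threshold are illustrative, not from the paper):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors (plain lists)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def vle_preference(seg_a, seg_b, task_emb, margin=0.05):
    """Label a segment pair with the VLE, deferring when uncertain.

    Returns (label, defer): label is 1 if seg_a is preferred, else 0;
    defer is True when the score gap falls below `margin`, i.e. the
    sample should be sent to the oracle instead of trusted to the VLE.
    """
    s_a = cosine(seg_a, task_emb)
    s_b = cosine(seg_b, task_emb)
    defer = abs(s_a - s_b) < margin  # low-confidence pair: ask the oracle
    return (1 if s_a > s_b else 0), defer
```

In a training loop, only the deferred pairs consume oracle budget; the rest are labeled for free by the VLE, which is how the query reduction arises.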
Problem

Research questions and friction points this paper is trying to address.

preference-based reinforcement learning
oracle feedback
vision-language embeddings
reward learning
scalability
Innovation

Methods, ideas, or system contributions that make the work stand out.

preference-based reinforcement learning
vision-language embeddings
oracle feedback reduction
uncertainty-aware filtering
parameter-efficient fine-tuning
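The "parameter-efficient fine-tuning" contribution above means adapting the VLE with oracle labels while updating only a small fraction of its weights. The paper does not specify the adapter design, so the LoRA-style low-rank update below is a hypothetical sketch: the pretrained weight `W` stays frozen and only the small factors `A` and `B` would be trained on oracle feedback.

```python
import random

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

class LoRALinear:
    """Frozen linear layer with a trainable low-rank residual (LoRA-style).

    Illustrative assumption: forward(x) = W x + (alpha / r) * B (A x),
    where only A (r x d_in) and B (d_out x r) are trainable.
    """
    def __init__(self, W, r, alpha=1.0, seed=0):
        rng = random.Random(seed)
        d_out, d_in = len(W), len(W[0])
        self.W = W  # frozen pretrained weights
        self.A = [[rng.gauss(0.0, 0.02) for _ in range(d_in)] for _ in range(r)]
        self.B = [[0.0] * r for _ in range(d_out)]  # zero-init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        base = matvec(self.W, x)
        low = matvec(self.B, matvec(self.A, x))
        return [b + self.scale * l for b, l in zip(base, low)]
```

Because `B` is zero-initialized, the adapted model starts out identical to the pretrained VLE and drifts only as oracle labels accumulate, which is consistent with the continual-refinement behavior the abstract describes.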