Enhancing Rating-Based Reinforcement Learning to Effectively Leverage Feedback from Large Vision-Language Models

📅 2025-06-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Manually designing reward functions in reinforcement learning (RL) is costly and scales poorly, while human-in-the-loop RL approaches suffer from expensive, inefficient human annotation. Method: This paper proposes ERL-VLM, an absolute trajectory-rating framework that leverages large vision-language models (VLMs), replacing conventional pairwise comparisons with VLM-generated absolute ratings of individual trajectories to enable autonomous reward learning with minimal human intervention. It introduces three key components: (1) a trajectory-level rating query interface, (2) a VLM feedback distillation mechanism, and (3) a robust reward-modeling approach resilient to data imbalance and noisy labels. Results: Evaluated across diverse control tasks, the method substantially outperforms existing VLM-based feedback approaches, achieving higher sample efficiency and improved training stability. It is the first work to empirically validate the feasibility and effectiveness of AI-generated absolute ratings as a foundation for reward learning in RL.
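
To make the shift from pairwise comparisons to absolute ratings concrete, the sketch below shows what a trajectory-level rating query might look like. It is illustrative only: the `vlm.generate` interface, the prompt wording, and the 5-level scale are our assumptions, not the paper's published implementation.

```python
# Illustrative sketch of an absolute-rating query to a VLM.
# Assumptions (not from the paper): a hypothetical `vlm.generate` API,
# a 5-level rating scale, and this particular prompt wording.

RATING_PROMPT = (
    "You are evaluating an agent trajectory for the task: {task}.\n"
    "Rate how well the trajectory accomplishes the task on a scale from "
    "0 (complete failure) to 4 (complete success). "
    "Respond with a single integer."
)

def rate_trajectory(vlm, task_description, frames):
    """Query a VLM for an absolute rating of a single trajectory.

    Unlike pairwise preference queries, each trajectory is scored on
    its own, so one query yields one training label for the reward model.
    """
    prompt = RATING_PROMPT.format(task=task_description)
    response = vlm.generate(images=frames, prompt=prompt)  # hypothetical API
    try:
        rating = int(response.strip())
    except ValueError:
        return None  # unparseable feedback is discarded as noise
    return rating if 0 <= rating <= 4 else None
```

Because each query labels a single trajectory rather than merely ordering a pair, the same annotation budget yields more expressive feedback, which is the sample-efficiency argument the summary makes.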

📝 Abstract
Designing effective reward functions remains a fundamental challenge in reinforcement learning (RL), as it often requires extensive human effort and domain expertise. While RL from human feedback has been successful in aligning agents with human intent, acquiring high-quality feedback is costly and labor-intensive, limiting its scalability. Recent advancements in foundation models present a promising alternative: leveraging AI-generated feedback to reduce reliance on human supervision in reward learning. Building on this paradigm, we introduce ERL-VLM, an enhanced rating-based RL method that effectively learns reward functions from AI feedback. Unlike prior methods that rely on pairwise comparisons, ERL-VLM queries large vision-language models (VLMs) for absolute ratings of individual trajectories, enabling more expressive feedback and improved sample efficiency. Additionally, we propose key enhancements to rating-based RL, addressing instability issues caused by data imbalance and noisy labels. Through extensive experiments across both low-level and high-level control tasks, we demonstrate that ERL-VLM significantly outperforms existing VLM-based reward generation methods. Our results highlight the potential of AI feedback for scaling RL with minimal human intervention, paving the way for more autonomous and efficient reward learning.
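
The abstract attributes instability in rating-based RL to data imbalance and noisy labels. As a rough illustration of how a rating-supervised reward model can address both, the PyTorch sketch below trains per-step rewards against VLM ratings using inverse-frequency class weights (for imbalance) and label smoothing (for noise). The architecture and both mitigations are generic stand-ins under our own assumptions, not a reproduction of ERL-VLM's actual objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Generic rating-based reward-learning sketch (our assumptions, not the
# paper's implementation): a per-step reward network whose mean trajectory
# reward is mapped to rating logits, trained with class-balanced,
# label-smoothed cross-entropy against VLM ratings.

class RewardModel(nn.Module):
    def __init__(self, obs_dim, n_ratings=5, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        # Simplified stand-in for the paper's rating model: map the mean
        # predicted reward of a trajectory to logits over rating classes.
        self.rating_head = nn.Linear(1, n_ratings)

    def forward(self, traj):              # traj: (T, obs_dim)
        r = self.net(traj)                # (T, 1) per-step rewards
        return self.rating_head(r.mean(0, keepdim=True))  # (1, n_ratings)

def rating_loss(model, trajs, ratings, n_ratings=5):
    """Cross-entropy over VLM ratings with two robustness tweaks.

    Inverse-frequency class weights counter imbalanced rating
    distributions; label smoothing softens noisy VLM labels.
    """
    counts = torch.bincount(ratings, minlength=n_ratings).float()
    weights = counts.sum() / counts.clamp(min=1)   # inverse-frequency weights
    logits = torch.cat([model(t) for t in trajs])  # (B, n_ratings)
    return F.cross_entropy(logits, ratings, weight=weights,
                           label_smoothing=0.1)
```

The learned per-step reward `model.net` can then replace the hand-designed reward when training the RL agent, which is the role the reward model plays in this line of work.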
Problem

Research questions and friction points this paper is trying to address.

Designing effective reward functions in reinforcement learning
Reducing reliance on costly human feedback in RL
Improving reward learning with AI-generated feedback
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages VLM absolute ratings for expressive feedback
Enhances rating-based RL for stability and efficiency
Reduces human dependency with AI-generated reward functions
Tung Minh Luu
Korea Advanced Institute of Science and Technology (KAIST)
Younghwan Lee
Korea Advanced Institute of Science and Technology (KAIST)
Donghoon Lee
Korea Advanced Institute of Science and Technology (KAIST)
Sunho Kim
Samsung Advanced Institute of Technology
AI, computer vision
Min Jun Kim
Korea Advanced Institute of Science and Technology (KAIST)
Chang D. Yoo
Korea Advanced Institute of Science and Technology (KAIST)
machine learning, computer vision, signal processing, speech enhancement, speech recognition