Active Query Selection for Crowd-Based Reinforcement Learning

📅 2025-08-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Preference-based reinforcement learning (PbRL) is often limited by the scarcity, high cost, and unreliability of human feedback. Method: This paper proposes a multi-annotator active preference learning framework with three core components: (1) an extended Advise algorithm that estimates each annotator's reliability online; (2) an entropy-based, uncertainty-aware querying strategy that actively selects the most informative trajectory pairs for annotation; and (3) probabilistic crowd modeling integrated with preference learning to improve policy optimization under sparse feedback. Results: Evaluation across 2D games (Taxi, Pacman, Frozen Lake) and the UVA/Padova simulator for Type 1 Diabetes management shows faster learning in most tasks, and the method outperforms the baselines on the blood glucose control task, supporting the effectiveness and generalizability of uncertainty-driven feedback acquisition.
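The summary does not spell out the querying rule; the sketch below shows one plausible reading of entropy-based pair selection, assuming a Bradley-Terry preference model over estimated segment returns. All names (`preference_entropy`, `select_query`, `reward_model`) are illustrative, not the paper's API.

```python
import numpy as np

def preference_entropy(r_a: float, r_b: float) -> float:
    """Binary entropy of the predicted preference between two trajectory
    segments under a Bradley-Terry model, given their estimated returns."""
    p = 1.0 / (1.0 + np.exp(-(r_a - r_b)))        # P(segment A preferred over B)
    p = np.clip(p, 1e-8, 1 - 1e-8)
    return float(-(p * np.log(p) + (1 - p) * np.log(1 - p)))

def select_query(pairs, reward_model):
    """Pick the trajectory pair whose predicted preference is most uncertain
    (highest entropy) and send it to the annotators."""
    scores = [
        preference_entropy(reward_model(seg_a), reward_model(seg_b))
        for seg_a, seg_b in pairs
    ]
    return pairs[int(np.argmax(scores))]

# Toy usage: estimated returns stand in for a learned reward model.
if __name__ == "__main__":
    candidate_pairs = [((0.9,), (0.1,)), ((0.52,), (0.48,))]
    fake_reward_model = lambda seg: seg[0]        # placeholder return estimate
    print(select_query(candidate_pairs, fake_reward_model))  # selects the near-tie pair
```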

📝 Abstract
Preference-based reinforcement learning has gained prominence as a strategy for training agents in environments where the reward signal is difficult to specify or misaligned with human intent. However, its effectiveness is often limited by the high cost and low availability of reliable human input, especially in domains where expert feedback is scarce or errors are costly. To address this, we propose a novel framework that combines two complementary strategies: probabilistic crowd modelling to handle noisy, multi-annotator feedback, and active learning to prioritize feedback on the most informative agent actions. We extend the Advise algorithm to support multiple trainers, estimate their reliability online, and incorporate entropy-based query selection to guide feedback requests. We evaluate our approach in a set of environments that span both synthetic and real-world-inspired settings, including 2D games (Taxi, Pacman, Frozen Lake) and a blood glucose control task for Type 1 Diabetes using the clinically approved UVA/Padova simulator. Our preliminary results demonstrate that agents trained with feedback on uncertain trajectories exhibit faster learning in most tasks, and we outperform the baselines for the blood glucose control task.
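For the multi-trainer extension of Advise, a rough sketch of how per-annotator reliability might be estimated online and folded into an Advise-style feedback policy is given below. The agreement-based reliability update and the product combination rule are assumptions drawn from the abstract, not the paper's exact method; class and function names are hypothetical. In Advise-style policy shaping, the resulting probabilities would then be multiplied with the agent's own action distribution before action selection.

```python
from collections import defaultdict

class AnnotatorModel:
    """Per-annotator feedback counts and an online reliability estimate C."""
    def __init__(self, prior_c: float = 0.8):
        self.c = prior_c                      # estimated probability the annotator is right
        self.agree = 1.0                      # Laplace-smoothed agreement counts
        self.total = 2.0
        self.delta = defaultdict(float)       # (state, action) -> tally of +1 "right" / -1 "wrong"

    def record(self, state, action, is_positive: bool, agrees_with_majority: bool):
        self.delta[(state, action)] += 1.0 if is_positive else -1.0
        self.agree += 1.0 if agrees_with_majority else 0.0
        self.total += 1.0
        # Keep C above 0.5 so feedback is never treated as actively misleading.
        self.c = max(0.51, min(0.99, self.agree / self.total))

def feedback_policy(annotators, state, actions):
    """Advise-style probability that each action is optimal, combining all
    annotators weighted by their estimated reliability."""
    probs = {}
    for a in actions:
        num, den = 1.0, 1.0
        for ann in annotators:
            d = ann.delta[(state, a)]
            num *= ann.c ** d
            den *= (1.0 - ann.c) ** d
        probs[a] = num / (num + den)
    return probs
```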
Problem

Research questions and friction points this paper is trying to address.

Reducing human feedback cost in reinforcement learning
Handling noisy multi-annotator preference feedback
Prioritizing feedback on most informative agent actions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probabilistic crowd modeling for noisy feedback
Active learning prioritizes most informative actions
Entropy-based query selection guides feedback requests