ActiveUltraFeedback: Efficient Preference Data Generation using Active Learning

📅 2026-03-10
🤖 AI Summary
This work addresses the high cost of preference data annotation in reinforcement learning from human feedback (RLHF), particularly in low-resource and expert domains, by proposing a modular active learning framework. The framework dynamically selects the most informative response pairs for annotation based on uncertainty estimation and introduces two novel sampling strategies—Double Reverse Thompson Sampling and DeltaUCB—that prioritize pairs where the model exhibits significant discrepancies in predicted quality. Experimental results demonstrate that the approach achieves comparable or superior downstream task performance using only one-sixth of the annotation budget required by static baselines, substantially improving data efficiency and model alignment.
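The paper itself does not spell out the DeltaUCB scoring rule here, but the description above (prioritize pairs with large predicted quality gaps, weighted by uncertainty) suggests a UCB-style acquisition over response pairs. The sketch below is an illustrative interpretation, not the authors' implementation: `delta_ucb_pair`, the Gaussian-style uncertainty bonus, and the `beta` trade-off parameter are all assumptions.

```python
import itertools

def delta_ucb_pair(means, stds, beta=1.0):
    """Illustrative DeltaUCB-style acquisition (assumed form, not the
    paper's): score each response pair by its predicted reward gap plus
    an exploration bonus from the reward model's uncertainty, and return
    the highest-scoring pair for annotation."""
    best_pair, best_score = None, float("-inf")
    for i, j in itertools.combinations(range(len(means)), 2):
        # Optimistic gap: |predicted quality difference| + uncertainty bonus
        score = abs(means[i] - means[j]) + beta * (stds[i] + stds[j])
        if score > best_score:
            best_pair, best_score = (i, j), score
    return best_pair, best_score

# Example: four candidate responses with predicted rewards and uncertainties
means = [0.2, 0.9, 0.5, 0.4]
stds = [0.05, 0.30, 0.10, 0.02]
pair, score = delta_ucb_pair(means, stds, beta=1.0)  # -> (0, 1), the largest optimistic gap
```

Under this reading, a large `beta` favors pairs the reward model is unsure about, while `beta = 0` degenerates to a pure greedy gap criterion.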

📝 Abstract
Reinforcement Learning from Human Feedback (RLHF) has become the standard for aligning Large Language Models (LLMs), yet its efficacy is bottlenecked by the high cost of acquiring preference data, especially in low-resource and expert domains. To address this, we introduce ACTIVEULTRAFEEDBACK, a modular active learning pipeline that leverages uncertainty estimates to dynamically identify the most informative responses for annotation. Our pipeline facilitates the systematic evaluation of standard response selection methods alongside DOUBLE REVERSE THOMPSON SAMPLING (DRTS) and DELTAUCB, two novel methods prioritizing response pairs with large predicted quality gaps, leveraging recent results showing that such pairs provide good signals for fine-tuning. Our experiments demonstrate that ACTIVEULTRAFEEDBACK yields high-quality datasets that lead to significant improvements in downstream performance, notably achieving comparable or superior results with as little as one-sixth of the annotated data relative to static baselines. Our pipeline is available at https://github.com/lasgroup/ActiveUltraFeedback and our preference datasets at https://huggingface.co/ActiveUltraFeedback.
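The abstract does not define Double Reverse Thompson Sampling, so the following is only a hedged sketch of a generic Thompson-style pair selector consistent with the stated goal (pairs with large predicted quality gaps). The function name, the Gaussian posterior, and the best-versus-worst pairing rule are all assumptions for illustration, not the paper's DRTS algorithm.

```python
import random

def thompson_gap_pair(means, stds, seed=None):
    """Illustrative Thompson-sampling-style pair selection (assumed form,
    not the paper's DRTS): draw one reward sample per response from a
    Gaussian posterior, then pair the sampled-best response against the
    sampled-worst, so that sampling randomness drives exploration while
    the selected pair tends to have a large quality gap."""
    rng = random.Random(seed)
    # One posterior draw per candidate response
    samples = [rng.gauss(m, s) for m, s in zip(means, stds)]
    best = max(range(len(samples)), key=samples.__getitem__)
    worst = min(range(len(samples)), key=samples.__getitem__)
    return best, worst

# Example: three candidate responses; with tiny uncertainty the draw
# tracks the predicted means, pairing the strongest against the weakest.
best, worst = thompson_gap_pair([0.0, 1.0, 2.0], [0.001, 0.001, 0.001], seed=42)
```

A usage note under this interpretation: repeated calls with different seeds yield different pairs when uncertainties are large, which is the mechanism by which posterior sampling spreads annotation budget across plausible-but-uncertain comparisons.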
Problem

Research questions and friction points this paper is trying to address.

Preference Data Generation
Active Learning
Reinforcement Learning from Human Feedback
Large Language Models
Annotation Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Active Learning
Preference Data Generation
Reinforcement Learning from Human Feedback
Uncertainty Estimation
Response Pair Selection