RLBFF: Binary Flexible Feedback to bridge between Human Feedback & Verifiable Rewards

📅 2025-09-25
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
To address the poor interpretability and vulnerability to reward hacking in RLHF, as well as the narrow, correctness-focused scope of RLVR, this paper proposes RLBFF, a framework that automatically extracts binary-evaluable principles (e.g., “conciseness”, “hallucination-free”) from natural-language feedback and uses them to train fine-grained, interpretable reward models framed as an entailment task. These models support dynamic specification of evaluation dimensions at inference time. By combining the diversity of human preferences with the precision of rule-based verification, RLBFF enables principle-level, controllable alignment. The resulting reward models achieve state-of-the-art accuracy on RM-Bench (86.2%) and JudgeBench (81.4%), and a Qwen3-32B model aligned with RLBFF matches or exceeds o3-mini and DeepSeek R1 on general alignment benchmarks at less than 5% of their inference cost, demonstrating substantial gains in reward-modeling flexibility, controllability, and generalization.
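
As a concrete illustration of the principle-extraction step, here is a minimal sketch of turning free-form feedback into binary-evaluable principles with an instruction-following LLM. The prompt template, the `call_llm` helper, and the JSON output format are assumptions for illustration, not the paper's actual pipeline.

```python
import json

# Hypothetical extraction prompt; the paper's exact wording is not shown here.
EXTRACTION_PROMPT = """\
Read the following feedback on a model response and list the principles
it appeals to. Each principle must be answerable with yes or no.
Return JSON: [{{"principle": "...", "satisfied": true}}, ...]

Feedback: {feedback}
"""

def extract_binary_principles(feedback: str, call_llm) -> list[dict]:
    """Turn free-form feedback into binary (principle, satisfied) pairs.

    `call_llm` is a hypothetical text-in/text-out completion function;
    any instruction-tuned model could play this role.
    """
    raw = call_llm(EXTRACTION_PROMPT.format(feedback=feedback))
    parsed = json.loads(raw)
    # Keep only well-formed entries with a boolean yes/no label.
    return [
        p for p in parsed
        if isinstance(p.get("principle"), str)
        and isinstance(p.get("satisfied"), bool)
    ]

# Feedback like "Good answer, but it rambles and invents a citation" might
# yield [{"principle": "conciseness", "satisfied": False},
#        {"principle": "hallucination-free", "satisfied": False}].
```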

📝 Abstract
Reinforcement Learning with Human Feedback (RLHF) and Reinforcement Learning with Verifiable Rewards (RLVR) are the main RL paradigms used in LLM post-training, each offering distinct advantages. However, RLHF struggles with interpretability and reward hacking because it relies on human judgments that usually lack explicit criteria, whereas RLVR is limited in scope by its focus on correctness-based verifiers. We propose Reinforcement Learning with Binary Flexible Feedback (RLBFF), which combines the versatility of human-driven preferences with the precision of rule-based verification, enabling reward models to capture nuanced aspects of response quality beyond mere correctness. RLBFF extracts principles that can be answered in a binary fashion (e.g. accuracy of information: yes, or code readability: no) from natural language feedback. Such principles can then be used to ground Reward Model training as an entailment task (response satisfies or does not satisfy an arbitrary principle). We show that Reward Models trained in this manner can outperform Bradley-Terry models when matched for data and achieve top performance on RM-Bench (86.2%) and JudgeBench (81.4%, #1 on leaderboard as of September 24, 2025). Additionally, users can specify principles of interest at inference time to customize the focus of our reward models, in contrast to Bradley-Terry models. Finally, we present a fully open source recipe (including data) to align Qwen3-32B using RLBFF and our Reward Model, to match or exceed the performance of o3-mini and DeepSeek R1 on general alignment benchmarks of MT-Bench, WildBench, and Arena Hard v2 (at <5% of the inference cost).
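
To make the entailment framing concrete, the sketch below scores a single (prompt, response, principle) triple with a binary classification head, in contrast to a Bradley-Terry model, which can only rank one response against another. The checkpoint name and input template are assumptions, not the paper's released artifacts.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder checkpoint; assumes label 1 means "response satisfies principle".
MODEL_NAME = "your-org/rlbff-entailment-rm"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def principle_reward(prompt: str, response: str, principle: str) -> float:
    """Return P(response satisfies `principle`) for the given prompt."""
    # One plausible input template; the paper's exact formatting may differ.
    text = f"Prompt: {prompt}\nResponse: {response}\nPrinciple: {principle}"
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

score = principle_reward(
    "Explain HTTP caching.",
    "HTTP caching stores responses so repeated requests are served faster...",
    "accuracy of information",
)
```
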
Problem

Research questions and friction points this paper is trying to address.

Bridging the interpretability gap between human feedback and verifiable rewards
Overcoming the lack of explicit criteria in the human judgments that RLHF relies on
Expanding reward signals beyond the correctness-based verifiers used in RLVR
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines the versatility of human preferences with the precision of rule-based binary verification
Trains reward models on an entailment task grounded in extracted binary principles
Lets users specify principles of interest at inference time to customize the reward focus (see the sketch below)
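
As referenced above, here is a minimal sketch of inference-time principle customization, assuming a per-principle scorer like `principle_reward` from the earlier sketch; the weighted-average aggregation is illustrative, not the paper's scheme.

```python
def custom_reward(prompt: str, response: str,
                  weighted_principles: dict[str, float],
                  score_fn) -> float:
    """Aggregate per-principle satisfaction scores into one scalar reward.

    `weighted_principles` maps each user-chosen principle to a positive
    weight; `score_fn(prompt, response, principle)` returns P(satisfied).
    """
    total = sum(weighted_principles.values())
    return sum(
        w * score_fn(prompt, response, p)
        for p, w in weighted_principles.items()
    ) / total

# At inference time, users emphasize whatever matters for their task:
reward = custom_reward(
    "Write a Python function to parse ISO dates.",
    "def parse_date(s): ...",
    {"code readability": 2.0, "accuracy of information": 1.0},
    score_fn=principle_reward,  # defined in the earlier sketch
)
```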