RubricHub: A Comprehensive and Highly Discriminative Rubric Dataset via Automated Coarse-to-Fine Generation

📅 2026-01-13
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the challenge that open-ended generation tasks lack verifiable, fine-grained scoring rubrics, which limits the effectiveness of rule-based reinforcement learning. To overcome this, the authors propose an automated coarse-to-fine rubric generation framework that combines principle-guided synthesis, multi-model aggregation, and a difficulty-evolution mechanism to construct high-quality, highly discriminative scoring criteria. The framework yields RubricHub, the first large-scale, multi-domain, fine-grained, and scalable rubric resource for automatic evaluation. Combining the generated rubrics with Rubric-based Rejection Sampling Fine-Tuning (RuFT) and Rubric-guided Reinforcement Learning (RuRL), a Qwen3-14B model trained on RubricHub achieves a score of 69.3 on HealthBench, surpassing closed-source models such as GPT-5 and setting a new state of the art.
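The rubric-guided reward described above can be pictured as scoring a response against a checklist of weighted criteria. The sketch below is a minimal illustration, not the paper's implementation: the `Criterion` type, the normalization scheme, and the keyword-matching `judge` stand-in (which a real system would replace with an LLM judge) are all assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Criterion:
    """A single rubric item with a point weight (negative = penalty)."""
    description: str
    points: float

def rubric_score(response: str,
                 rubric: List[Criterion],
                 judge: Callable[[str, str], bool]) -> float:
    """Sum the points of satisfied criteria, normalized by the maximum
    attainable positive points, and clipped at zero."""
    earned = sum(c.points for c in rubric if judge(response, c.description))
    max_points = sum(c.points for c in rubric if c.points > 0)
    return max(0.0, earned / max_points) if max_points else 0.0

# Toy rubric; the judge here just checks the criterion's last word.
rubric = [
    Criterion("mentions dosage", 2.0),
    Criterion("cites a guideline", 1.0),
    Criterion("gives unsafe advice", -3.0),
]
judge = lambda resp, crit: crit.split()[-1] in resp
print(rubric_score("Take the standard dosage per the WHO guideline",
                   rubric, judge))  # → 1.0
```

Fine-grained, discriminative criteria matter here precisely because this score becomes the reward signal: coarse criteria saturate quickly and stop separating good responses from better ones.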

📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has driven substantial progress in reasoning-intensive domains like mathematics. However, optimizing open-ended generation remains challenging due to the lack of ground truth. While rubric-based evaluation offers a structured proxy for verification, existing methods suffer from scalability bottlenecks and coarse criteria, resulting in a supervision ceiling effect. To address this, we propose an automated Coarse-to-Fine Rubric Generation framework. By synergizing principle-guided synthesis, multi-model aggregation, and difficulty evolution, our approach produces comprehensive and highly discriminative criteria capable of capturing subtle nuances. Based on this framework, we introduce RubricHub, a large-scale ($\sim$110k) and multi-domain dataset. We validate its utility through a two-stage post-training pipeline comprising Rubric-based Rejection Sampling Fine-Tuning (RuFT) and Reinforcement Learning (RuRL). Experimental results demonstrate that RubricHub unlocks significant performance gains: our post-trained Qwen3-14B achieves state-of-the-art (SOTA) results on HealthBench (69.3), surpassing proprietary frontier models such as GPT-5. Our code is available at \href{https://github.com/teqkilla/RubricHub}{this URL}.
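The RuFT stage of the pipeline can be sketched as ordinary rejection sampling: draw several candidate responses per prompt, score each against its rubric, and keep only high-scoring pairs as fine-tuning data. The helper names, threshold, and the toy length-based scorer below are illustrative assumptions, not the paper's code.

```python
import random
from typing import Callable, List, Tuple

def rejection_sample(prompt: str,
                     generate: Callable[[str], str],
                     score: Callable[[str, str], float],
                     k: int = 8,
                     threshold: float = 0.8) -> List[Tuple[str, str]]:
    """Sample k candidates and keep those whose rubric score clears the
    threshold; the survivors become (prompt, response) SFT pairs."""
    candidates = [generate(prompt) for _ in range(k)]
    return [(prompt, c) for c in candidates if score(prompt, c) >= threshold]

# Toy stand-ins: a random generator and a length-based "rubric" score.
random.seed(0)
generate = lambda p: p + " " + "detail " * random.randint(1, 5)
score = lambda p, r: min(1.0, len(r.split()) / 6)
kept = rejection_sample("Explain hypertension management.", generate, score)
print(f"{len(kept)} of 8 candidates kept")
```

In the paper's setup, `score` would be the rubric-based judge over RubricHub criteria, and the retained pairs feed the SFT stage before RuRL refines the policy against the same reward.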
Problem

Research questions and friction points this paper is trying to address.

open-ended generation
rubric-based evaluation
scalability bottleneck
coarse criteria
supervision ceiling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Coarse-to-Fine Rubric Generation
RubricHub
Reinforcement Learning with Verifiable Rewards
Multi-model Aggregation
Difficulty Evolution