DISCO Balances the Scales: Adaptive Domain- and Difficulty-Aware Reinforcement Learning on Imbalanced Data

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address GRPO’s tendency to over-optimize dominant domains under imbalanced multi-domain data, which degrades generalization and fairness, this paper proposes DISCO, a domain- and difficulty-aware reward scaling method. DISCO employs dual adaptive normalization: (1) domain-aware reward scaling, calibrated from domain frequency statistics, mitigates distributional bias; and (2) difficulty-aware scaling, guided by self-consistency evaluation, dynamically identifies high-value hard prompts, strengthening alignment on sparse domains and challenging samples. DISCO integrates seamlessly into the GRPO framework and requires no additional annotations or architectural modifications. Experiments across multiple LLMs and skewed training distributions show that DISCO outperforms existing GRPO variants by 5% on Qwen3 models and achieves state-of-the-art results on multi-domain alignment benchmarks, notably improving cross-domain fairness and generalization without increasing model complexity or annotation cost.
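The domain-aware half of the dual normalization can be sketched as inverse-frequency reweighting. The paper's exact formula is not reproduced here, so the exponent `alpha` and the mean-one normalization below are illustrative assumptions, not the authors' definition:

```python
import numpy as np

def domain_weights(domain_counts, alpha=0.5):
    """Hypothetical inverse-frequency domain weights: rarer domains get
    larger weight. `alpha` controls how aggressively rare domains are
    up-weighted (alpha=0 -> uniform, alpha=1 -> fully inverse-frequency)."""
    domains = list(domain_counts)
    counts = np.array([domain_counts[d] for d in domains], dtype=float)
    freqs = counts / counts.sum()          # empirical domain frequencies
    w = freqs ** (-alpha)                  # rarer domain -> larger raw weight
    w /= w.mean()                          # normalize so the average weight is 1
    return dict(zip(domains, w))

# Example: "math" dominates the training mix 9:1 over "law",
# so rewards from "law" prompts are scaled up relative to "math".
w = domain_weights({"math": 900, "law": 100})
```

With `alpha=0.5` and a 9:1 split, the rare domain's weight is exactly three times the dominant one's, counteracting frequency bias without discarding any data.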

📝 Abstract
Large Language Models (LLMs) are increasingly aligned with human preferences through Reinforcement Learning from Human Feedback (RLHF). Among RLHF methods, Group Relative Policy Optimization (GRPO) has gained attention for its simplicity and strong performance, notably eliminating the need for a learned value function. However, GRPO implicitly assumes a balanced domain distribution and uniform semantic alignment across groups - assumptions that rarely hold in real-world datasets. When applied to multi-domain, imbalanced data, GRPO disproportionately optimizes for dominant domains, neglecting underrepresented ones and resulting in poor generalization and fairness. We propose Domain-Informed Self-Consistency Policy Optimization (DISCO), a principled extension to GRPO that addresses inter-group imbalance with two key innovations. Domain-aware reward scaling counteracts frequency bias by reweighting optimization based on domain prevalence. Difficulty-aware reward scaling leverages prompt-level self-consistency to identify and prioritize uncertain prompts that offer greater learning value. Together, these strategies promote more equitable and effective policy learning across domains. Extensive experiments across multiple LLMs and skewed training distributions show that DISCO improves generalization, outperforms existing GRPO variants by 5% on Qwen3 models, and sets new state-of-the-art results on multi-domain alignment benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Addresses imbalance in multi-domain RLHF training data
Improves fairness and generalization for underrepresented domains
Enhances policy learning via domain- and difficulty-aware reward scaling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Domain-aware reward scaling reweights optimization
Difficulty-aware reward scaling prioritizes uncertain prompts
Combined strategies enhance equitable policy learning
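The difficulty-aware side can be sketched with a majority-vote self-consistency proxy: prompts where sampled completions disagree are treated as harder and up-weighted before GRPO's group normalization. The `beta` knob and the `1 + beta * (1 - consistency)` form are assumptions for illustration; the paper's actual scaling rule may differ:

```python
from collections import Counter
import numpy as np

def self_consistency(answers):
    """Fraction of sampled answers that agree with the majority answer."""
    counts = Counter(answers)
    return max(counts.values()) / len(answers)

def difficulty_scale(answers, beta=1.0):
    """Hypothetical difficulty weight: low self-consistency means a harder,
    higher-learning-value prompt, so its rewards are scaled up."""
    return 1.0 + beta * (1.0 - self_consistency(answers))

def grpo_advantages(rewards, scale=1.0, eps=1e-8):
    """GRPO-style group-normalized advantages (reward minus group mean,
    divided by group std), multiplied by an external scale factor."""
    r = np.asarray(rewards, dtype=float)
    return scale * (r - r.mean()) / (r.std() + eps)

# Usage: four sampled completions for one prompt; 3 of 4 agree,
# so self-consistency is 0.75 and the prompt gets a modest up-weight.
answers = ["42", "42", "17", "42"]
rewards = [1.0, 1.0, 0.0, 1.0]
adv = grpo_advantages(rewards, scale=difficulty_scale(answers))
```

Because both weights multiply the group-normalized advantage, they slot into GRPO's existing update with no value network, extra annotations, or architectural changes.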