ReDit: Reward Dithering for Improved LLM Policy Optimization

📅 2025-06-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Discrete reward signals often cause gradient anomalies, optimization instability, and slow convergence. To address this, we propose Reward Dithering (ReDit), the first method to inject controlled stochastic noise into discrete rewards, yielding continuous, differentiable pseudo-rewards. This facilitates exploration in flat regions, mitigates vanishing gradients, and helps escape local optima. ReDit integrates seamlessly with the policy gradient algorithm GRPO without altering model architecture or training paradigms. Experiments show that ReDit achieves baseline GRPO performance in only ~10% of the training steps and improves average task performance by 4% under identical training budgets. Visualization and theoretical analysis confirm its substantial benefits for gradient flow and convergence stability. The core innovation lies in a lightweight, reward-level perturbation mechanism that enables efficient and robust policy optimization for large language models.

📝 Abstract
DeepSeek-R1 has successfully enhanced Large Language Model (LLM) reasoning capabilities through its rule-based reward system. While such a rule-based system is a "perfect" reward in the sense that it effectively mitigates reward hacking, the resulting reward functions are often discrete. Our experimental observations suggest that discrete rewards can lead to gradient anomalies, unstable optimization, and slow convergence. To address this issue, we propose ReDit (Reward Dithering), a method that dithers the discrete reward signal by adding simple random noise. With this perturbed reward, exploratory gradients are provided continuously throughout the learning process, enabling smoother gradient updates and accelerating convergence. The injected noise also introduces stochasticity into flat reward regions, encouraging the model to explore novel policies and escape local optima. Experiments across diverse tasks demonstrate the effectiveness and efficiency of ReDit. On average, ReDit matches the performance of vanilla GRPO with only about 10% of the training steps and, when trained for a similar duration, still exhibits a 4% performance improvement over vanilla GRPO. Visualizations confirm that ReDit significantly mitigates gradient issues, and theoretical analyses further validate these advantages.
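The core mechanism described above can be sketched in a few lines: perturb each discrete reward with zero-mean noise before computing GRPO's group-normalized advantages. This is a minimal illustration of the idea, not the paper's implementation; the function names, the choice of Gaussian noise, and the noise scale `sigma` are assumptions for the sketch.

```python
import random

def dither_rewards(rewards, sigma=0.05):
    """Perturb discrete rewards with zero-mean Gaussian noise, turning a
    step-like reward signal into a continuous pseudo-reward (the ReDit idea).
    sigma is an illustrative noise scale, not a value from the paper."""
    return [r + random.gauss(0.0, sigma) for r in rewards]

def group_advantages(rewards):
    """GRPO-style group-normalized advantages: (r - mean) / std computed
    over a group of sampled completions for the same prompt."""
    mu = sum(rewards) / len(rewards)
    var = sum((r - mu) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5 or 1.0  # guard against zero std in flat-reward groups
    return [(r - mu) / std for r in rewards]

# In a flat region (e.g. every sampled completion scores 1.0), the raw
# discrete rewards have zero variance and yield degenerate all-zero
# advantages; the dithered rewards still carry a usable gradient signal.
raw = [1.0, 1.0, 1.0, 1.0]
dithered = dither_rewards(raw, sigma=0.05)
adv = group_advantages(dithered)
```

Because the injected noise is zero-mean, the expected dithered reward equals the original discrete reward, which is consistent with the stability claims summarized above.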
Problem

Research questions and friction points this paper is trying to address.

Discrete rewards cause gradient anomalies and slow convergence
ReDit adds noise to discrete rewards for smoother optimization
ReDit improves training efficiency and mitigates gradient issues
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dithers discrete rewards with random noise
Provides continuous exploratory gradients
Introduces stochasticity in flat reward regions
👥 Authors
Chenxing Wei (Shenzhen University)
Jiarui Yu (USTC)
Ying Tiffany He (College of Computer Science and Software Engineering, Shenzhen University, China)
Hande Dong (Tencent)
Yao Shu (Hong Kong University of Science and Technology (Guangzhou), China)
Fei Yu (School of Information Technology, Carleton University, Canada)