From Prior to Pro: Efficient Skill Mastery via Distribution Contractive RL Finetuning

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited success rate of pretrained general-purpose robotic policies in complex, long-horizon manipulation tasks by introducing DICE-RL, a novel framework that conceptualizes reinforcement learning as a distributional contraction operator. DICE-RL enables online refinement of diffusion- or flow-based pretrained policies through residual-style fine-tuning, integrating selective behavioral regularization with value-guided action selection. Operating solely on pixel inputs, the method achieves stable and sample-efficient policy optimization. Experimental results demonstrate that DICE-RL substantially improves task success rates across both simulated and real-world robotic platforms, while maintaining high training stability and sample efficiency.
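The summary above mentions value-guided action selection over a diffusion- or flow-based prior. As a minimal sketch of that idea (not the paper's actual implementation), the snippet below samples several candidate actions from a pretrained generative prior and executes the one a learned critic scores highest; the names `QCritic` and `prior_sample`, and all shapes, are illustrative assumptions.

```python
# Hypothetical sketch of value-guided action selection: draw candidates from
# the pretrained prior (plus any residual correction) and pick the action the
# critic values most. Interfaces here are assumed, not taken from the paper.
import torch
import torch.nn as nn


class QCritic(nn.Module):
    """Toy critic scoring (observation-feature, action) pairs."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs_feat: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs_feat, action], dim=-1)).squeeze(-1)


@torch.no_grad()
def value_guided_action(prior_sample, critic: QCritic,
                        obs_feat: torch.Tensor, num_candidates: int = 16) -> torch.Tensor:
    """Return the candidate action with the highest critic value.

    `prior_sample(obs_feat, n)` is assumed to draw n actions from the
    pretrained diffusion/flow prior for the current observation features.
    """
    candidates = prior_sample(obs_feat, num_candidates)           # (n, act_dim)
    obs_batch = obs_feat.unsqueeze(0).expand(num_candidates, -1)  # (n, obs_dim)
    q_values = critic(obs_batch, candidates)                      # (n,)
    return candidates[q_values.argmax()]
```

In this reading, the prior keeps action proposals on the data manifold while the critic, trained from online feedback, steers execution toward higher-success behaviors.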

📝 Abstract
We introduce Distribution Contractive Reinforcement Learning (DICE-RL), a framework that uses reinforcement learning (RL) as a "distribution contraction" operator to refine pretrained generative robot policies. DICE-RL turns a pretrained behavior prior into a high-performing "pro" policy by amplifying high-success behaviors from online feedback. We pretrain a diffusion- or flow-based policy for broad behavioral coverage, then finetune it with a stable, sample-efficient residual off-policy RL framework that combines selective behavior regularization with value-guided action selection. Extensive experiments and analyses show that DICE-RL reliably improves performance with strong stability and sample efficiency. It enables mastery of complex long-horizon manipulation skills directly from high-dimensional pixel inputs, both in simulation and on a real robot. Project website: https://zhanyisun.github.io/dice.rl.2026/.
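The abstract describes residual off-policy finetuning with selective behavior regularization, but does not spell out the objective. The sketch below is one hypothetical instantiation, assuming a frozen generative prior, a small residual network, and a standard off-policy critic; the "selective" regularization shown (penalizing the residual only where the prior action already scores well) is an illustrative reading of the abstract, not the paper's exact loss.

```python
# Minimal sketch of one residual finetuning step under the assumptions above.
# All function and batch-key names are hypothetical.
import torch
import torch.nn.functional as F


def residual_actor_loss(critic, residual_net, obs_feat, prior_action,
                        reg_weight: float = 1.0) -> torch.Tensor:
    """Maximize Q of the corrected action; regularize toward the prior selectively."""
    delta = residual_net(obs_feat)              # residual correction to the prior action
    corrected = prior_action + delta
    q_corrected = critic(obs_feat, corrected)
    with torch.no_grad():
        q_prior = critic(obs_feat, prior_action)
        # Only regularize transitions where the prior action is already competitive.
        keep_close = (q_prior >= q_corrected).float()
    behavior_reg = keep_close * (delta ** 2).sum(dim=-1)
    return (-q_corrected + reg_weight * behavior_reg).mean()


def critic_loss(critic, target_critic, batch, gamma: float = 0.99) -> torch.Tensor:
    """Off-policy TD loss; `next_action` is assumed to come from the current
    corrected policy (prior + residual) evaluated at the next observation."""
    with torch.no_grad():
        target_q = batch["reward"] + gamma * (1.0 - batch["done"]) * \
            target_critic(batch["next_obs_feat"], batch["next_action"])
    return F.mse_loss(critic(batch["obs_feat"], batch["action"]), target_q)
```

Under this view, the RL update contracts the broad pretrained action distribution onto its high-success modes while the regularizer keeps the finetuned policy anchored to the prior where the prior already succeeds.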
Problem

Research questions and friction points this paper is trying to address.

reinforcement learning
policy finetuning
behavior prior
long-horizon manipulation
sample efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distribution Contractive RL
Behavior Prior Finetuning
Sample-Efficient Reinforcement Learning
Value-Guided Action Selection
Robot Skill Mastery