🤖 AI Summary
This work addresses the challenge of modeling users’ personalized and often implicit preferences in deformable object manipulation by proposing RKO, a method that efficiently aligns pretrained visuomotor diffusion policies with user preferences using only a few preference demonstrations. RKO integrates the strengths of Reward-regularized Policy Optimization (RPO) and Kahneman-Tversky Optimization (KTO) to enable structured preference learning, outperforming existing fine-tuning and preference-alignment approaches in both sample efficiency and task performance. Experiments on real-world cloth-folding tasks, spanning diverse garment types and preference settings, demonstrate that policies aligned via RKO achieve substantially higher user satisfaction and stronger personalization. These results validate the effectiveness and feasibility of structured preference learning for complex manipulation of deformable objects.
📝 Abstract
Humans naturally develop preferences for how manipulation tasks should be performed, and these preferences are often subtle, personal, and difficult to articulate. Although accounting for them is important for increasing personalization and user satisfaction, such preferences remain largely underexplored in robotic manipulation, particularly in the context of deformable objects like garments and fabrics. In this work, we study how to adapt pretrained visuomotor diffusion policies to reflect preferred behaviors using limited demonstrations. We introduce RKO, a novel preference-alignment method that combines the benefits of two recent frameworks: RPO and KTO. We evaluate RKO against common preference-learning frameworks, including these two, as well as a vanilla diffusion-policy baseline, on real-world cloth-folding tasks spanning multiple garments and preference settings. We show that preference-aligned policies, particularly those trained with RKO, achieve superior performance and sample efficiency compared to standard diffusion-policy fine-tuning. These results highlight the importance and feasibility of structured preference learning for scaling personalized robot behavior in complex deformable object manipulation tasks.
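To make the combination concrete, here is a minimal sketch of what an RKO-style objective could look like for a diffusion policy. The abstract does not specify the exact loss, so this assumes a KTO-style unpaired desirable/undesirable loss built on a Diffusion-DPO-style surrogate (denoising-error differences against a frozen reference policy), plus an RPO-style behavior-cloning regularizer on the preferred demonstrations. All names, tensor shapes, and hyperparameters (`beta`, `lam_d`, `lam_u`, `alpha`) are illustrative, not the authors' implementation.

```python
import torch

def rko_loss_sketch(
    eps_theta_good, eps_ref_good,  # policy / frozen-reference noise predictions on preferred demos
    eps_theta_bad, eps_ref_bad,    # same on dispreferred rollouts
    eps_good, eps_bad,             # ground-truth noise targets from the forward diffusion process
    beta=0.1,                      # inverse temperature on the implicit reward (assumed hyperparameter)
    lam_d=1.0, lam_u=1.0,          # KTO weights for desirable / undesirable samples
    alpha=0.5,                     # weight of the RPO-style BC regularizer (assumed)
):
    """Illustrative RKO-style objective for a visuomotor diffusion policy.

    All noise tensors are assumed to have shape (batch, horizon, action_dim).
    The implicit log-ratio log(pi_theta / pi_ref) is approximated by the
    difference of denoising errors between the trainable policy and a frozen
    reference copy, as in Diffusion-DPO-style preference losses.
    """
    # Per-sample denoising errors, averaged over horizon and action dims -> shape (batch,).
    err_theta_good = ((eps_theta_good - eps_good) ** 2).flatten(1).mean(-1)
    err_ref_good = ((eps_ref_good - eps_good) ** 2).flatten(1).mean(-1)
    err_theta_bad = ((eps_theta_bad - eps_bad) ** 2).flatten(1).mean(-1)
    err_ref_bad = ((eps_ref_bad - eps_bad) ** 2).flatten(1).mean(-1)

    # Surrogate implicit rewards: lower denoising error than the reference means higher reward.
    r_good = -(err_theta_good - err_ref_good)
    r_bad = -(err_theta_bad - err_ref_bad)

    # Simplified KTO reference point: a detached batch-mean baseline standing in
    # for KTO's KL-based reference term.
    z_ref = torch.cat([r_good, r_bad]).mean().detach()

    # KTO value terms for desirable / undesirable samples; no pairwise comparisons needed.
    loss_good = lam_d * (1 - torch.sigmoid(beta * (r_good - z_ref)))
    loss_bad = lam_u * (1 - torch.sigmoid(beta * (z_ref - r_bad)))
    kto_term = torch.cat([loss_good, loss_bad]).mean()

    # RPO-style regularizer: a plain denoising (behavior-cloning) loss on the
    # preferred demos, anchoring the policy to the demonstrated behavior.
    bc_term = err_theta_good.mean()

    return kto_term + alpha * bc_term
```

In this sketch, the behavior-cloning term is what would let a handful of preferred demonstrations steer the policy without it drifting far from its pretrained behavior, while the KTO-style term exploits desirable and undesirable samples without requiring them to come in matched pairs.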