RoboHanger: Learning Generalizable Robotic Hanger Insertion for Diverse Garments

📅 2024-12-02
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenging robotic task of hanging unknown, folded garments on hangers: a problem characterized by long-horizon dependencies, high fabric deformability, and the scarcity of real-world training data. We propose a hierarchical framework that decomposes the task into sequential sub-policies for perception, hanger localization, and garment insertion. To enhance robustness to fabric deformation, we design an input representation combining single-view depth maps with binary segmentation masks. Furthermore, we construct a physics-based simulation environment containing 144 synthetically generated clothing assets, coupled with a low-dimensional, geometry-aware action space and geometrically grounded Sim2Real transfer. Our method enables efficient training in simulation and achieves a 75% success rate on eight unseen real-world garments, significantly outperforming end-to-end baselines, and demonstrates strong generalization across diverse fabric materials and garment styles.

📝 Abstract
For the task of hanging clothes, learning how to insert a hanger into a garment is a crucial step, but it has rarely been explored in robotics. In this work, we address the problem of inserting a hanger into various unseen garments that are initially laid flat on a table. This task is challenging due to its long-horizon nature, the high degrees of freedom of the garments, and the lack of data. To simplify the learning process, we first propose breaking the task into several subtasks. Then, we formulate each subtask as a policy learning problem and propose a low-dimensional action parameterization. To overcome the challenge of limited data, we build our own simulator and create 144 synthetic clothing assets to effectively collect high-quality training data. Our approach uses single-view depth images and object masks as input, which mitigates the Sim2Real appearance gap and achieves high generalization capabilities for new garments. Extensive experiments in both simulation and the real world validate our proposed method. By training on various garments in the simulator, our method achieves a 75% success rate on 8 different unseen garments in the real world.
Problem

Research questions and friction points this paper is trying to address.

Learning robotic hanger insertion for diverse garments.
Addressing challenges of long-horizon tasks and garment variability.
Overcoming data scarcity with synthetic data and simulation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Breaks the task into subtasks for policy learning.
Uses a low-dimensional action parameterization.
Builds a simulator with synthetic garment assets to collect training data.
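The abstract describes the policy input as a single-view depth image combined with a binary object mask, which sidesteps the Sim2Real appearance gap. A minimal sketch of such an observation builder is shown below; the helper name and the normalization scheme are assumptions for illustration, as the page does not specify the paper's exact preprocessing.

```python
import numpy as np

def make_observation(depth: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Stack a single-view depth map with a binary garment mask into a
    2-channel observation. Hypothetical helper; the paper's exact
    preprocessing is not specified on this page."""
    # Normalize depth to [0, 1] over valid (non-zero) pixels so the
    # representation is invariant to absolute camera distance.
    valid = depth > 0
    d = np.zeros_like(depth, dtype=np.float32)
    if valid.any():
        d_min, d_max = depth[valid].min(), depth[valid].max()
        d[valid] = (depth[valid] - d_min) / max(d_max - d_min, 1e-6)
    m = mask.astype(np.float32)
    return np.stack([d, m], axis=0)  # shape: (2, H, W)
```

Because neither channel carries RGB appearance, the same representation can be rendered cheaply in simulation and reproduced from a real depth camera plus a segmentation model, which is what makes this input choice attractive for Sim2Real transfer.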
Yuxing Chen
CFCS, School of Computer Science, Peking University; Galbot
Songlin Wei
University of Southern California; (previously) Peking University
Bowen Xiao
CFCS, School of Computer Science, Peking University; Galbot
Jiangran Lyu
CFCS, School of Computer Science, Peking University; Galbot
Jiayi Chen
CFCS, School of Computer Science, Peking University; Galbot
Feng Zhu
Galbot
He Wang
CFCS, School of Computer Science, Peking University; Galbot; Beijing Academy of Artificial Intelligence