🤖 AI Summary
To address the fine-grained structural complexity and highly dynamic deformations that make autonomous hair styling difficult, this paper proposes the first model-based robot hairstyle-shaping framework for open-world scenarios. Methodologically: (1) it introduces an action-conditioned latent-space state editing mechanism that couples a compact 3D hairstyle latent space, pre-trained at scale, with a learned latent dynamics model; (2) using an in-house hair physics simulator for synthetic data generation, it deploys an MPPI-based planner to enable vision-guided closed-loop control. Key contributions include generalizable dynamics modeling of previously unseen hairstyles and zero-shot sim-to-real transfer. In simulated closed-loop styling on unseen hairstyles, the method achieves 22% lower final geometric error and a 42% higher success rate than the state-of-the-art system; on real wigs, it robustly completes challenging styling tasks where that system fails.
📝 Abstract
Hair care is an essential daily activity, yet it remains inaccessible to individuals with limited mobility and challenging for autonomous robot systems due to the fine-grained physical structure and complex dynamics of hair. In this work, we present DYMO-Hair, a model-based robot hair care system. We introduce a novel dynamics learning paradigm that is suited for volumetric quantities such as hair, relying on an action-conditioned latent state editing mechanism, coupled with a compact 3D latent space of diverse hairstyles to improve generalizability. This latent space is pre-trained at scale using a novel hair physics simulator, enabling generalization across previously unseen hairstyles. Using the dynamics model with a Model Predictive Path Integral (MPPI) planner, DYMO-Hair is able to perform visual goal-conditioned hair styling. Experiments in simulation demonstrate that DYMO-Hair's dynamics model outperforms baselines at capturing local deformation for diverse, unseen hairstyles. DYMO-Hair further outperforms baselines in closed-loop hair styling tasks on unseen hairstyles, with an average of 22% lower final geometric error and 42% higher success rate than the state-of-the-art system. Real-world experiments exhibit zero-shot transferability of our system to wigs, achieving consistent success on challenging unseen hairstyles where the state-of-the-art system fails. Together, these results introduce a foundation for model-based robot hair care, advancing toward more generalizable, flexible, and accessible robot hair styling in unconstrained physical environments. More details are available on our project page: https://chengyzhao.github.io/DYMOHair-web/.
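To make the planning component concrete, below is a minimal sketch of vanilla MPPI over a learned latent dynamics model. Everything here is a generic illustration, not the paper's implementation: the `dynamics` function, the distance-to-goal cost, and all hyperparameters (`horizon`, `num_samples`, `sigma`, `lam`) are hypothetical stand-ins for DYMO-Hair's latent hair dynamics model and styling cost.

```python
import numpy as np

def mppi_plan(z0, z_goal, dynamics, horizon=10, num_samples=256,
              action_dim=3, sigma=0.2, lam=1.0, seed=0):
    """Vanilla MPPI: sample noisy action sequences, roll them out through the
    latent dynamics, score each rollout, and return the importance-weighted
    mean action sequence. `dynamics` maps batched (latent, action) -> latent."""
    rng = np.random.default_rng(seed)
    mean = np.zeros((horizon, action_dim))
    noise = rng.normal(0.0, sigma, size=(num_samples, horizon, action_dim))
    actions = mean[None] + noise                      # (K, H, A) candidate sequences
    costs = np.zeros(num_samples)
    z = np.repeat(z0[None], num_samples, axis=0)      # batched latent states (K, D)
    for t in range(horizon):
        z = dynamics(z, actions[:, t])                # one latent rollout step
        costs += np.linalg.norm(z - z_goal[None], axis=-1)  # distance-to-goal cost
    w = np.exp(-(costs - costs.min()) / lam)          # exponentiated-cost weights
    w /= w.sum()
    return (w[:, None, None] * actions).sum(axis=0)   # weighted mean plan (H, A)

# Toy usage with a linear stand-in dynamics z' = z + a (purely illustrative):
toy_dynamics = lambda z, a: z + a
plan = mppi_plan(np.zeros(3), np.ones(3), toy_dynamics)
print(plan.shape)  # (10, 3)
```

In a closed-loop setting such as the one described above, only the first action of the returned plan would be executed before re-planning from the newly observed (re-encoded) latent state.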