🤖 AI Summary
In household settings, articulated object manipulation suffers from depth-perception failures on transparent or reflective materials and generalizes poorly across part-level interactions. To address these challenges, we introduce the first large-scale, material-agnostic articulated object manipulation dataset with fine-grained, part-level annotations, including photorealistic material randomization and scene-level executable interaction pose labels. We propose a part-centric, material-invariant data paradigm and a modular neural framework that integrates physics-based rendering synthesis, part-level semantic and kinematic modeling, depth estimation, and interaction pose optimization. Experiments demonstrate that our method improves depth estimation accuracy by 18.3% and executable pose prediction accuracy by 22.7% over state-of-the-art methods in both simulated and real-world scenarios, and that it exhibits strong cross-material and cross-form generalization as well as robustness to challenging optical properties.
📝 Abstract
Effectively manipulating articulated objects in household scenarios is a crucial step toward achieving general embodied artificial intelligence. Mainstream research in 3D vision has primarily focused on manipulation through depth perception and pose detection. In real-world environments, however, these methods often struggle with imperfect depth perception, for example on transparent lids and reflective handles. Moreover, they generally lack the diversity of part-based interactions required for flexible, adaptable manipulation. To address these challenges, we introduce a large-scale, part-centric dataset for articulated object manipulation that features both photorealistic material randomization and detailed annotations of part-oriented, scene-level actionable interaction poses. We evaluate the effectiveness of our dataset by integrating it with several state-of-the-art methods for depth estimation and interaction pose prediction. In addition, we propose a novel modular framework that delivers superior, robust performance for generalizable articulated object manipulation. Extensive experiments demonstrate that our dataset significantly improves depth perception and actionable interaction pose prediction in both simulation and real-world scenarios. More information and demos can be found at: https://pku-epic.github.io/GAPartManip/.
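The abstract describes a modular pipeline that couples depth perception with part-level interaction pose prediction. The sketch below is a minimal, hypothetical illustration of that data flow only; the class names (`DepthRestorer`, `InteractionPosePredictor`) and their stub logic are assumptions for illustration, not the paper's actual interface or method.

```python
# Hypothetical sketch of a modular perception-to-action flow: depth restoration
# on an RGB-D frame, followed by part-level interaction pose prediction and
# selection. All names and logic here are illustrative placeholders.
import numpy as np


class DepthRestorer:
    """Stub for a learned depth-restoration module (e.g., one trained on
    material-randomized synthetic data to handle transparent/reflective parts)."""

    def restore(self, rgb: np.ndarray, raw_depth: np.ndarray) -> np.ndarray:
        # Placeholder: a real module would inpaint/refine missing or noisy depth.
        depth = raw_depth.copy()
        depth[depth <= 0] = np.median(depth[depth > 0])  # fill invalid pixels
        return depth


class InteractionPosePredictor:
    """Stub for a part-centric module that proposes scored end-effector poses."""

    def predict(self, rgb: np.ndarray, depth: np.ndarray):
        # Placeholder: return (score, 4x4 pose) candidates; a real module would
        # reason about part geometry and kinematics.
        rng = np.random.default_rng(0)
        return [(float(rng.random()), np.eye(4)) for _ in range(5)]


def select_best_pose(rgb, raw_depth, restorer, predictor):
    """Run the two modules in sequence and pick the highest-scoring pose."""
    depth = restorer.restore(rgb, raw_depth)
    candidates = predictor.predict(rgb, depth)
    score, pose = max(candidates, key=lambda c: c[0])
    return pose, score


if __name__ == "__main__":
    rgb = np.zeros((480, 640, 3), dtype=np.uint8)
    raw_depth = np.full((480, 640), 0.8, dtype=np.float32)
    raw_depth[100:200, 100:200] = 0.0  # simulate missing depth on a transparent lid
    pose, score = select_best_pose(
        rgb, raw_depth, DepthRestorer(), InteractionPosePredictor()
    )
    print("selected pose score:", score)
```

One appeal of such a modular layout, consistent with the evaluation described above, is that each stage can be swapped out for different state-of-the-art depth estimation or pose prediction methods while the surrounding pipeline stays unchanged.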