AdaManip: Adaptive Articulated Object Manipulation Environments and Policy Learning

📅 2025-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenge of adaptive manipulation of complex articulated objects, such as safes and knob locks, whose behavior is governed by implicit internal states (e.g., lock engagement, hinge constraints) that the robot cannot observe. We propose the first adaptive manipulation framework specifically designed for such implicitly state-structured articulated objects. Methodologically: (1) we construct a simulation environment encompassing nine categories of objects with diverse implicit mechanisms; (2) we develop a task-driven adaptive demonstration collection strategy; and (3) we design an end-to-end imitation learning paradigm grounded in a 3D visual diffusion model, enhanced by real–sim co-training. Our key contribution is the first systematic modeling of trial-and-error manipulation dynamics under unobservable states, which yields significantly improved success rates and cross-object generalization on tasks such as cabinet opening and lock disengagement. Extensive evaluation validates the method's effectiveness both in simulation and on real robotic platforms.
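The trial-and-error dynamic described above can be illustrated with a minimal, hypothetical sketch (all names are illustrative, not the authors' implementation): the policy only sees visual observations, so it conditions on the history of failed attempts to infer the hidden mechanism and retry.

```python
class ToySafe:
    """Toy environment with a hidden lock state that is never exposed
    in observations, mimicking an implicitly state-structured object."""
    def __init__(self):
        self._locked = True  # hidden internal state

    def observe(self):
        return "door_closed"  # visual observation only; lock state invisible

    def step(self, action):
        if action == "unlock":
            self._locked = False
            return False  # unlocking alone does not open the door
        if action == "pull" and not self._locked:
            return True   # door opens only once the latch is released
        return False


def adaptive_manipulate(env, policy, max_attempts=10):
    """Trial-and-error loop: the policy adapts using the outcome
    history rather than a one-time visual inference."""
    history = []
    for _ in range(max_attempts):
        obs = env.observe()
        action = policy(obs, history)     # adapt based on past outcomes
        success = env.step(action)        # hidden state decides the outcome
        history.append((action, success))
        if success:
            return True
    return False


def simple_policy(obs, history):
    """Hand-written adaptive policy: pull first, unlock after a
    failed pull, then pull again."""
    tried = [a for a, _ in history]
    if "unlock" not in tried:
        return "pull" if not tried else "unlock"
    return "pull"
```

A history-conditioned policy succeeds here (pull, unlock, pull), whereas a policy that ignores its failures and always pulls never opens the door, which is the distinction the paper's adaptive framework targets.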

📝 Abstract
Articulated object manipulation is a critical capability for robots to perform various tasks in real-world scenarios. Composed of multiple parts connected by joints, articulated objects are endowed with diverse functional mechanisms through complex relative motions. For example, a safe consists of a door, a handle, and a lock, where the door can only be opened when the latch is unlocked. The internal structure, such as the state of a lock or joint angle constraints, cannot be directly observed from visual observation. Consequently, successful manipulation of these objects requires adaptive adjustment based on trial and error rather than a one-time visual inference. However, previous datasets and simulation environments for articulated objects have primarily focused on simple manipulation mechanisms where the complete manipulation process can be inferred from the object's appearance. To enhance the diversity and complexity of adaptive manipulation mechanisms, we build a novel articulated object manipulation environment and equip it with 9 categories of objects. Based on the environment and objects, we further propose an adaptive demonstration collection and 3D visual diffusion-based imitation learning pipeline that learns the adaptive manipulation policy. The effectiveness of our designs and proposed method is validated through both simulation and real-world experiments. Our project page is available at: https://adamanip.github.io
Problem

Research questions and friction points this paper is trying to address.

Enhancing robot manipulation of articulated objects
Developing adaptive learning for complex object mechanisms
Creating diverse simulation environments for manipulation training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive manipulation environment
3D visual diffusion learning
Diverse articulated object categories
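The "3D visual diffusion learning" item refers to a diffusion-based policy head. As a rough, generic sketch of the technique (a standard DDPM-style reverse process, not the paper's architecture; all names and hyperparameters are assumptions), an action is produced by iteratively denoising a noise sample conditioned on the observation features:

```python
import numpy as np


def denoise_actions(noise_pred, obs_feat, action_dim=7, steps=50, rng=None):
    """Generic DDPM-style reverse process for action generation.

    `noise_pred(a_t, t, obs_feat)` stands in for the learned
    noise-prediction network conditioned on 3D visual features.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    betas = np.linspace(1e-4, 0.02, steps)      # assumed noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    a = rng.standard_normal(action_dim)          # start from pure noise
    for t in reversed(range(steps)):
        eps = noise_pred(a, t, obs_feat)
        # DDPM posterior mean: remove the predicted noise component
        a = (a - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            a += np.sqrt(betas[t]) * rng.standard_normal(action_dim)
    return a
```

In an adaptive setting, `obs_feat` would also encode the attempt history, so repeated denoising passes can propose different actions after a failure.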
👥 Authors
Yuanfei Wang, Peking University (robot learning, reinforcement learning)
Xiaojie Zhang, Beijing University of Posts and Telecommunications
Ruihai Wu, Peking University (computer vision, robotics)
Yu Li, Beijing University of Posts and Telecommunications
Yan Shen, Center on Frontiers of Computing Studies, School of Computer Science, Peking University
Mingdong Wu, Peking University (Embodied AI, Reinforcement Learning, Generative Models)
Zhaofeng He, Beijing University of Posts and Telecommunications
Yizhou Wang, Center on Frontiers of Computing Studies, School of Computer Science, Peking University; Inst. for Artificial Intelligence, Peking University; Nat'l Eng. Research Center of Visual Technology, Peking University; State Key Laboratory of General Artificial Intelligence, Peking University
Hao Dong, Center on Frontiers of Computing Studies, School of Computer Science, Peking University