Adaptive Articulated Object Manipulation On The Fly with Foundation Model Reasoning and Part Grounding

📅 2025-07-24
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Robotic manipulation of articulated objects faces two core challenges: (1) perceptual ambiguity due to occluded or unobservable component geometries, and (2) the lack of generalizable control policies stemming from diverse functional mechanisms across object categories. To address these, we propose AdaRPGβ€”a novel framework that pioneers the use of foundation models for part-level functional grounding and mechanism reasoning. We introduce the first cross-category dataset of articulated-object part functional annotations. Leveraging foundation-model-driven part segmentation, functional identification, and mechanism understanding, AdaRPG jointly integrates visual perception with high-level control code generation to enable real-time, adaptive manipulation policy synthesis. Extensive experiments in both simulation and real-world settings demonstrate strong zero-shot generalization to unseen articulated object categories and significantly improved success rates on complex mechanisms.

πŸ“ Abstract
Articulated objects pose diverse manipulation challenges for robots. Since their internal structures are not directly observable, robots must adaptively explore and refine actions to generate successful manipulation trajectories. While existing works have attempted cross-category generalization in adaptive articulated object manipulation, two major challenges persist: (1) the geometric diversity of real-world articulated objects complicates visual perception and understanding, and (2) variations in object functions and mechanisms hinder the development of a unified adaptive manipulation strategy. To address these challenges, we propose AdaRPG, a novel framework that leverages foundation models to extract object parts, which exhibit greater local geometric similarity than entire objects, thereby enhancing visual affordance generalization for functional primitive skills. To support this, we construct a part-level affordance annotation dataset to train the affordance model. Additionally, AdaRPG utilizes the common knowledge embedded in foundation models to reason about complex mechanisms and generate high-level control codes that invoke primitive skill functions based on part affordance inference. Simulation and real-world experiments demonstrate AdaRPG's strong generalization ability across novel articulated object categories.
Problem

Research questions and friction points this paper is trying to address.

Adapting robot manipulation for diverse articulated objects
Overcoming geometric diversity in visual perception
Developing unified strategy for varied object mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages foundation models for part extraction
Uses part-level affordance annotation dataset
Generates control codes via foundation model reasoning
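As a sketch of the last point, the paper describes a foundation model that reasons about an object's mechanism and emits high-level control code calling primitive skill functions on affordance-grounded parts. The snippet below is a hypothetical illustration of what such generated code might look like; all function and part names are assumptions for illustration, not the paper's actual API.

```python
# Hypothetical primitive skill functions, as might be exposed to the
# code-generating foundation model. Each returns a symbolic action
# record here; a real system would command the robot instead.

def grasp(part):
    """Grasp the given part at its inferred affordance point."""
    return f"grasp({part})"

def rotate(part, angle_deg):
    """Rotate the grasped part about its inferred joint axis."""
    return f"rotate({part}, {angle_deg})"

def pull(part, distance_m):
    """Pull the grasped part along its inferred motion direction."""
    return f"pull({part}, {distance_m})"

def open_cabinet(handle_part):
    # Example of generated high-level control code: sequence primitive
    # skills on the part identified by the affordance model.
    return [
        grasp(handle_part),
        rotate(handle_part, 30),
        pull(handle_part, 0.2),
    ]

plan = open_cabinet("handle")
print(plan)
```

The point of this structure is that only the short high-level program is object-specific; the primitive skills and the part-level affordance model are shared across categories, which is what the paper credits for its cross-category generalization.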
Xiaojie Zhang
Beijing University of Posts and Telecommunications

Yuanfei Wang
Peking University
robot learning, reinforcement learning

Ruihai Wu
Peking University
computer vision, robotics

Kunqi Xu
School of EECS, Peking University

Yu Li
Beijing University of Posts and Telecommunications

Liuyu Xiang
Beijing University of Posts and Telecommunications
Computer Vision, Reinforcement Learning, LLM Agent

Hao Dong
School of Computer Science, Peking University

Zhaofeng He
Beijing University of Posts and Telecommunications