Object-Focus Actor for Data-efficient Robot Generalization Dexterous Manipulation

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Weak generalization across diverse scenes and object poses, coupled with high demonstration requirements, hinders robotic dexterous manipulation. This paper proposes an object-centric hierarchical policy framework featuring a novel end-effector trajectory consistency–based object focusing mechanism, enabling strong generalization from only ten human demonstrations. The method integrates a three-stage vision–action co-design: (1) object perception and 6D pose estimation; (2) pre-manipulation pose reaching planning; and (3) a lightweight Object-Focus Actor policy network. Evaluated on seven real-world dexterous manipulation tasks, the approach significantly improves both positional and background generalization, achieving robust and transferable manipulation performance with minimal demonstration cost.
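The three-stage pipeline described above can be sketched in Python. This is a minimal illustration only: every function and class name here (`Pose6D`, `estimate_6d_pose`, `pre_manipulation_pose`, `ofa_policy`, `run_pipeline`) is hypothetical and stands in for the paper's actual components, which are not released as an API.

```python
from dataclasses import dataclass


@dataclass
class Pose6D:
    """A 6D object pose: position (x, y, z) plus a unit quaternion."""
    position: tuple
    quaternion: tuple


def estimate_6d_pose(rgb, depth):
    # Stage 1 stand-in: a real system would segment the target object
    # and run an RGB-D pose estimator; here we return a fixed pose.
    return Pose6D((0.4, 0.0, 0.1), (1.0, 0.0, 0.0, 0.0))


def pre_manipulation_pose(obj_pose, z_offset=0.15):
    # Stage 2 stand-in: a pre-manipulation pose defined relative to the
    # object, here simply a fixed vertical offset above it.
    x, y, z = obj_pose.position
    return Pose6D((x, y, z + z_offset), obj_pose.quaternion)


def ofa_policy(object_centric_obs):
    # Stage 3 stand-in: the lightweight Object-Focus Actor network would
    # map an object-centric observation to a dexterous-hand action;
    # here we return a zero action of fixed dimension.
    return [0.0] * 6


def run_pipeline(rgb, depth):
    obj_pose = estimate_6d_pose(rgb, depth)    # Stage 1: perception + pose
    reach = pre_manipulation_pose(obj_pose)    # Stage 2: reach planning
    action = ofa_policy({"pose": obj_pose})    # Stage 3: focused policy
    return reach, action
```

The design point this sketch captures is that stages 1 and 2 normalize the scene (the hand always starts from a pose defined relative to the object), so the stage-3 policy only ever sees object-centric observations, which is what lets it generalize across backgrounds and placements from few demonstrations.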

📝 Abstract
Robot manipulation learning from human demonstrations offers a rapid means to acquire skills but often lacks generalization across diverse scenes and object placements. This limitation hinders real-world applications, particularly in complex tasks requiring dexterous manipulation. The Vision-Language-Action (VLA) paradigm leverages large-scale data to enhance generalization; however, due to data scarcity, its performance remains limited. In this work, we introduce Object-Focus Actor (OFA), a novel, data-efficient approach for generalized dexterous manipulation. OFA exploits the consistent end-effector trajectories observed in dexterous manipulation tasks, allowing for efficient policy training. Our method employs a hierarchical pipeline: object perception and pose estimation, pre-manipulation pose reaching, and OFA policy execution. This process ensures that manipulation remains focused and efficient, even across varied backgrounds and positional layouts. Comprehensive real-world experiments across seven tasks demonstrate that OFA significantly outperforms baseline methods in both positional and background generalization tests. Notably, OFA achieves robust performance with only 10 demonstrations, highlighting its data efficiency.
Problem

Research questions and friction points this paper is trying to address.

Lack of generalization in robot manipulation learning from human demonstrations
Data scarcity limits Vision-Language-Action (VLA) paradigm performance
Need for data-efficient dexterous manipulation across diverse scenes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Object-Focus Actor for efficient policy training
Hierarchical pipeline with object perception
Robust performance with minimal demonstrations
👥 Authors
Yihang Li
JD Explore Academy, JD Company
Tianle Zhang
JD Explore Academy, JD Company
Xuelong Wei
JD Explore Academy, JD Company
Jiayi Li
Beijing Jiaotong University
Lin Zhao
JD Explore Academy, JD Company
Dongchi Huang
Beihang University
Zhirui Fang
Tsinghua University
Minhua Zheng
Beijing Jiaotong University
Wenjun Dai
JD Explore Academy, JD Company
Xiaodong He
JD Explore Academy, JD Company