🤖 AI Summary
Weak generalization across diverse scenes and object poses, coupled with high demonstration requirements, hinders robotic dexterous manipulation. This paper proposes an object-centric hierarchical policy framework featuring a novel object-focusing mechanism built on the consistency of end-effector trajectories, enabling strong generalization from only ten human demonstrations. The method integrates a three-stage vision–action co-design: (1) object perception and 6D pose estimation; (2) planning to reach a pre-manipulation pose; and (3) a lightweight Object-Focus Actor policy network. Evaluated on seven real-world dexterous manipulation tasks, the approach significantly improves both positional and background generalization, achieving robust and transferable manipulation performance with minimal demonstration cost.
📝 Abstract
Learning robot manipulation from human demonstrations offers a rapid means of acquiring skills but often lacks generalization across diverse scenes and object placements. This limitation hinders real-world applications, particularly in complex tasks requiring dexterous manipulation. The Vision-Language-Action (VLA) paradigm leverages large-scale data to enhance generalization; however, due to data scarcity, VLA's performance remains limited. In this work, we introduce Object-Focus Actor (OFA), a novel, data-efficient approach for generalized dexterous manipulation. OFA exploits the consistent end-effector trajectories observed in dexterous manipulation tasks, enabling efficient policy training. Our method employs a hierarchical pipeline: object perception and pose estimation, reaching of a pre-manipulation pose, and OFA policy execution. This design keeps the manipulation focused and efficient, even across varied backgrounds and positional layouts. Comprehensive real-world experiments across seven tasks demonstrate that OFA significantly outperforms baseline methods in both positional and background generalization tests. Notably, OFA achieves robust performance with only 10 demonstrations, highlighting its data efficiency.
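The three-stage pipeline in the abstract can be sketched in code. This is a minimal illustrative skeleton, not the authors' implementation: every function name, the standoff distance, and the proportional stepping rule are assumptions standing in for the real perception, planning, and policy components.

```python
# Illustrative sketch of OFA's hierarchical pipeline (all names and
# numbers are hypothetical stand-ins, not the paper's actual API).

def estimate_object_pose(rgb_image):
    """Stage 1 (stand-in): object perception and pose estimation.
    A real system would run detection plus 6D pose estimation; here we
    return a fixed illustrative (x, y, z) position in metres."""
    return [0.4, 0.0, 0.05]

def plan_pre_manipulation_pose(object_pos, standoff=0.10):
    """Stage 2 (stand-in): move to a fixed standoff above the object,
    so the local policy always starts from a consistent pose relative
    to the object regardless of where it sits in the scene."""
    x, y, z = object_pos
    return [x, y, z + standoff]

def object_focus_policy(ee_pos, object_pos, gain=0.5):
    """Stage 3 (stand-in): a lightweight policy acting in the object's
    local frame; here it simply steps the end-effector a fraction of
    the way toward the object each call."""
    return [e + gain * (o - e) for e, o in zip(ee_pos, object_pos)]

def run_pipeline(rgb_image, steps=5):
    obj = estimate_object_pose(rgb_image)   # stage 1: perception
    ee = plan_pre_manipulation_pose(obj)    # stage 2: pre-manipulation pose
    for _ in range(steps):                  # stage 3: closed-loop execution
        ee = object_focus_policy(ee, obj)
    return ee
```

Because stage 2 normalizes the starting pose relative to the object, the stage-3 policy only ever sees object-centric state, which is what lets a handful of demonstrations cover many scene layouts.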