🤖 AI Summary
This work addresses the limited ability of dual-arm robots to generalize manipulation of 3D objects to novel categories, a challenge primarily caused by strong dependence on large task-specific datasets. To overcome this, the authors propose a cross-category functional (affordance) mapping approach that combines vision foundation models with semantic correspondence. After fine-tuning on only a few examples of a new object category, the method generalizes in a zero-shot manner to previously unseen objects for bimanual manipulation. This is the first study to combine vision foundation models and semantic correspondence for coordinated dual-arm control. Extensive experiments in both simulation and real-world environments demonstrate high task success rates, significantly reduced data requirements, and improved efficiency and generalization on novel object categories.
📝 Abstract
Bimanual manipulation is essential yet challenging for robots executing complex tasks, as it requires coordinated collaboration between two arms. However, existing methods for bimanual manipulation often rely on costly data collection and training, and struggle to generalize efficiently to unseen objects from novel categories. In this paper, we present Bi-Adapt, a novel framework designed for efficient generalization in bimanual manipulation via semantic correspondence. Bi-Adapt achieves cross-category affordance mapping by leveraging the strong capabilities of vision foundation models. After fine-tuning with limited data on novel categories, Bi-Adapt exhibits notable zero-shot generalization to out-of-category objects. Extensive experiments conducted in both simulation and real-world environments validate the effectiveness and efficiency of our approach, achieving high success rates on benchmark tasks across novel categories with limited data. Project website: https://biadapt-project.github.io/