Active Vision Might Be All You Need: Exploring Active Vision in Bimanual Robotic Manipulation

📅 2024-09-26
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
🤖 AI Summary
To address the occlusion and limited field of view caused by fixed cameras in imitation learning, this paper proposes an Active Vision (AV) framework for bimanual robotic manipulation. The method integrates a 7-DoF camera arm with immersive VR-based bimanual teleoperation to enable task-driven, real-time viewpoint planning and environment interaction. The authors introduce human-guided active vision, deeply embedded within a bimanual teleoperation system for the first time, and establish the AV-ALOHA architecture, which jointly models viewpoint selection and manipulation policy learning. Evaluated on the bimanual ALOHA 2 platform, the approach leverages stereo vision, VR rendering, and sim-to-real co-training. Across multiple low-visibility manipulation tasks, it achieves a 42% average improvement in success rate over fixed-camera baselines, demonstrating the benefit of task-aware active viewpoint selection for complex bimanual operations.
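To make the phrase "jointly models viewpoint selection and manipulation policy learning" concrete, here is a minimal sketch of a single imitation-learning policy that outputs actions for both manipulator arms and the camera arm from stereo images and proprioception. The class name, network sizes, and the 21-DoF action layout (7 DoF per arm plus the 7-DoF camera arm) are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class JointAVManipulationPolicy(nn.Module):
    """Hypothetical sketch: one policy head for manipulation and viewpoint control."""

    def __init__(self, img_feat_dim=256, proprio_dim=21, action_dim=21):
        super().__init__()
        # Shared encoder for the stereo pair, stacked channel-wise (2 x RGB = 6 channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, img_feat_dim), nn.ReLU(),
        )
        # Single action head covering both manipulator arms and the camera arm.
        self.head = nn.Sequential(
            nn.Linear(img_feat_dim + proprio_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, stereo_rgb, proprio):
        # stereo_rgb: (B, 6, H, W) left+right RGB; proprio: (B, 21) joint positions.
        feat = self.encoder(stereo_rgb)
        return self.head(torch.cat([feat, proprio], dim=-1))
```

Training such a policy with behavior cloning on the teleoperated demonstrations would supervise camera-arm motion and manipulation in one shot, which is the core idea the summary describes.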

📝 Abstract
Imitation learning has demonstrated significant potential in performing high-precision manipulation tasks using visual feedback. However, it is common practice in imitation learning for cameras to be fixed in place, resulting in issues like occlusion and limited field of view. Furthermore, cameras are often placed in broad, general locations, without an effective viewpoint specific to the robot's task. In this work, we investigate the utility of active vision (AV) for imitation learning and manipulation, in which, in addition to the manipulation policy, the robot learns an AV policy from human demonstrations to dynamically change the robot's camera viewpoint to obtain better information about its environment and the given task. We introduce AV-ALOHA, a new bimanual teleoperation robot system with AV, an extension of the ALOHA 2 robot system, incorporating an additional 7-DoF robot arm that only carries a stereo camera and is solely tasked with finding the best viewpoint. This camera streams stereo video to an operator wearing a virtual reality (VR) headset, allowing the operator to control the camera pose using head and body movements. The system provides an immersive teleoperation experience, with bimanual first-person control, enabling the operator to dynamically explore and search the scene and simultaneously interact with the environment. We conduct imitation learning experiments with our system both in the real world and in simulation, across a variety of tasks that emphasize viewpoint planning. Our results demonstrate the effectiveness of human-guided AV for imitation learning, showing significant improvements over fixed cameras in tasks with limited visibility. Project website: https://soltanilara.github.io/av-aloha/
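The abstract describes driving the camera-arm viewpoint with the operator's head and body movements while streaming stereo video to a VR headset. Below is a minimal, hypothetical sketch of that pose mapping; the frame names, the fixed camera mounting offset, and the hand-off to an IK controller are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def headset_to_camera_target(T_world_headset, T_world_ref, T_camera_offset):
    """Map the operator's VR headset pose to a target pose for the camera-arm
    end effector. All poses are 4x4 homogeneous transforms (illustrative frames)."""
    # Express headset motion relative to a calibrated reference frame so that
    # the operator's head movement drives the camera viewpoint.
    T_ref_headset = np.linalg.inv(T_world_ref) @ T_world_headset
    # Apply a fixed mounting offset between the arm flange and the stereo camera.
    return T_ref_headset @ T_camera_offset

# Usage (sketch): at each teleoperation step, read the headset pose from the VR
# runtime, compute the target pose, and pass it to the camera arm's IK controller.
```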
Problem

Research questions and friction points this paper is trying to address.

Fixed cameras in imitation learning cause occlusion and a limited field of view.
Cameras are usually placed in broad, general locations rather than at viewpoints suited to the robot's task.
It is unclear whether letting the robot learn to control its own viewpoint improves bimanual manipulation performance.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learned active vision policy for dynamic camera viewpoint selection
AV-ALOHA system extending ALOHA 2 with a dedicated 7-DoF camera arm
Immersive VR teleoperation with stereo streaming for bimanual first-person control
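The AI summary above also mentions sim-to-real co-training. A common recipe is to mix simulation and real demonstrations in each training batch; the sketch below shows one way to do that. The function name, batch size, and the 50/50 mixing ratio are illustrative assumptions, not values reported by the paper.

```python
import random

def cotraining_batch(sim_demos, real_demos, batch_size=64, real_fraction=0.5):
    """Draw a mixed batch of simulation and real demonstrations for co-training."""
    n_real = int(batch_size * real_fraction)
    # Sample with replacement from each source, then shuffle the combined batch.
    batch = random.choices(real_demos, k=n_real)
    batch += random.choices(sim_demos, k=batch_size - n_real)
    random.shuffle(batch)
    return batch
```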