🤖 AI Summary
This work addresses the performance bottlenecks in surgical robot imitation learning caused by data scarcity, constrained operational spaces, and stringent safety requirements. We propose a supervised Mixture-of-Experts (MoE) architecture, tailored to phase-based surgical tasks, that can be integrated seamlessly atop any autonomous policy. Relying solely on stereo endoscopic images and fewer than 150 expert demonstrations, our approach leverages the Action Chunking Transformer (ACT) for end-to-end policy learning. To our knowledge, this is the first application of supervised MoE to surgical imitation learning; it achieves substantial performance gains under extremely limited data and enables zero-shot transfer to unseen camera views and to both ex vivo and in vivo tissue. On intestinal grasping and retraction tasks, our method significantly outperforms standard ACT and state-of-the-art vision-language-action models in success rate and out-of-distribution robustness, with preliminary feasibility demonstrated in live porcine surgery.
📝 Abstract
Imitation learning has achieved remarkable success in robotic manipulation, yet its application to surgical robotics remains challenging due to data scarcity, constrained workspaces, and the need for an exceptional level of safety and predictability. We present a supervised Mixture-of-Experts (MoE) architecture designed for phase-structured surgical manipulation tasks, which can be added on top of any autonomous policy. Unlike prior surgical robot learning approaches that rely on multi-camera setups or thousands of demonstrations, we show that a lightweight action-decoder policy such as the Action Chunking Transformer (ACT) can learn complex, long-horizon manipulation from fewer than 150 demonstrations using solely stereo endoscopic images, when equipped with our architecture. We evaluate our approach on the collaborative surgical task of bowel grasping and retraction, where a robot assistant interprets visual cues from a human surgeon, executes targeted grasping on deformable tissue, and performs sustained retraction. We benchmark our method against state-of-the-art Vision-Language-Action (VLA) models and the standard ACT baseline. Our results show that generalist VLAs fail entirely to acquire the task, even under standard in-distribution conditions. Furthermore, while standard ACT achieves moderate success in-distribution, adopting a supervised MoE architecture significantly boosts its performance, yielding higher success rates in-distribution and superior robustness in out-of-distribution scenarios, including novel grasp locations, reduced illumination, and partial occlusions. Notably, it generalizes to unseen testing viewpoints and also transfers zero-shot to ex vivo porcine tissue without additional training, offering a promising pathway toward in vivo deployment. To support this, we present qualitative preliminary results of policy roll-outs during in vivo porcine surgery.
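The core idea, a supervised gate routing each observation to a per-phase expert policy, can be sketched in a few lines. The paper does not release code, so the following is a hedged illustration only: all names, dimensions, the hard-argmax routing, and the linear stand-ins for the ACT-style experts are our assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PHASES = 3   # hypothetical surgical phases, e.g. approach / grasp / retract
FEAT_DIM = 8   # stand-in for a stereo endoscopic image embedding
CHUNK = 4      # actions predicted per chunk (ACT-style action chunking)
ACT_DIM = 6    # e.g. end-effector pose deltas + gripper command

class ExpertPolicy:
    """Stand-in for one per-phase action-chunking policy (e.g. an ACT head)."""
    def __init__(self) -> None:
        self.W = rng.normal(size=(FEAT_DIM, CHUNK * ACT_DIM))

    def predict(self, feat: np.ndarray) -> np.ndarray:
        # Map an observation feature to a chunk of future actions.
        return (feat @ self.W).reshape(CHUNK, ACT_DIM)

class SupervisedMoE:
    """Routes each observation to the expert for its surgical phase.

    'Supervised' here means the gating network is trained with phase
    labels rather than learned end-to-end, so at inference time exactly
    one expert runs per observation (hard routing)."""
    def __init__(self, n_phases: int) -> None:
        self.experts = [ExpertPolicy() for _ in range(n_phases)]
        self.gate_W = rng.normal(size=(FEAT_DIM, n_phases))

    def gate(self, feat: np.ndarray) -> int:
        # Phase classifier: pick the most likely phase for this frame.
        return int(np.argmax(feat @ self.gate_W))

    def predict(self, feat: np.ndarray) -> tuple[int, np.ndarray]:
        phase = self.gate(feat)
        return phase, self.experts[phase].predict(feat)

moe = SupervisedMoE(N_PHASES)
obs_feat = rng.normal(size=FEAT_DIM)   # one observation's feature vector
phase, chunk = moe.predict(obs_feat)   # chunk has shape (CHUNK, ACT_DIM)
```

Because the gate is phase-supervised, the layer can wrap any base policy: each expert is an independent copy of that policy trained only on its phase's demonstrations, which matches the paper's claim that the architecture sits "on top of any autonomous policy."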