SonoGym: High Performance Simulation for Challenging Surgical Tasks with Robotic Ultrasound

📅 2025-07-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing deep reinforcement learning (DRL) and imitation learning (IL) approaches for complex surgical tasks—such as anatomical reconstruction and navigation—are hindered by the absence of high-fidelity, scalable simulation environments. To address this, we introduce the first physics-enhanced simulation platform specifically designed for robotic ultrasound surgery, integrating CT-derived 3D anatomical models with a dual-path ultrasound image synthesis method that combines physics-based and generative modeling. The platform enables real-time ultrasound simulation and autonomous navigation training in challenging domains such as orthopedics. We propose a modular DRL framework supporting parallel simulation and history-dependent reward modeling, and incorporate vision Transformers with diffusion-based policies for efficient IL. Experiments demonstrate successful acquisition of robust, clinically relevant policies across multiple surgical tasks. Our work also reveals critical limitations in existing methods concerning generalization capability and reward function design.

📝 Abstract
Ultrasound (US) is a widely used medical imaging modality due to its real-time capabilities, non-invasive nature, and cost-effectiveness. Robotic ultrasound can further enhance its utility by reducing operator dependence and improving access to complex anatomical regions. While deep reinforcement learning (DRL) and imitation learning (IL) have shown potential for autonomous navigation, their use in complex surgical tasks such as anatomy reconstruction and surgical guidance remains limited -- largely due to the lack of realistic and efficient simulation environments tailored to these tasks. We introduce SonoGym, a scalable simulation platform for complex robotic ultrasound tasks that enables parallel simulation across tens to hundreds of environments. Our framework supports realistic, real-time simulation of US data from CT-derived 3D models of the anatomy through both a physics-based and a generative modeling approach. SonoGym enables the training of DRL and recent IL agents (vision transformers and diffusion policies) for relevant tasks in robotic orthopedic surgery by integrating common robotic platforms and orthopedic end effectors. We further incorporate submodular DRL -- a recent method that handles history-dependent rewards -- for anatomy reconstruction, as well as safe reinforcement learning for surgery. Our results demonstrate successful policy learning across a range of scenarios, while also highlighting the limitations of current methods in clinically relevant environments. We believe our simulation can facilitate research in robot learning approaches for such challenging robotic surgery applications. Dataset, code, and videos are publicly available at https://sonogym.github.io/.
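The abstract's claim of parallel simulation across tens to hundreds of environments can be illustrated with a minimal vectorized-environment sketch. This is a toy stand-in, not SonoGym's actual API: `ToyUSNavEnv` and its 1-D probe dynamics are hypothetical, chosen only to show how one batched NumPy call steps every environment at once.

```python
import numpy as np

class ToyUSNavEnv:
    """Toy vectorized environment illustrating batched stepping.
    Hypothetical stand-in -- not SonoGym's actual API. State: 1-D probe
    offset from the target pose; actions are bounded displacements."""

    def __init__(self, n_envs, seed=0):
        self.n = n_envs
        self.rng = np.random.default_rng(seed)
        self.pos = None

    def reset(self):
        # One call resets every environment in the batch.
        self.pos = self.rng.uniform(-1.0, 1.0, size=self.n)
        return self.pos.copy()

    def step(self, actions):
        # One batched NumPy call advances all environments simultaneously,
        # which is what makes tens-to-hundreds of parallel envs cheap.
        self.pos = np.clip(self.pos + np.clip(actions, -0.1, 0.1), -1.0, 1.0)
        reward = -np.abs(self.pos)       # closer to the target = higher reward
        done = np.abs(self.pos) < 0.05   # per-environment termination flags
        return self.pos.copy(), reward, done

envs = ToyUSNavEnv(n_envs=128)
obs = envs.reset()
for _ in range(50):
    obs, reward, done = envs.step(-obs)  # naive proportional controller
print(done.all())  # True: every env converges under this toy dynamics
```

A DRL library would replace the proportional controller with a learned policy, but the batched `reset`/`step` shape of the loop stays the same.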
Problem

Research questions and friction points this paper is trying to address.

Lack of realistic, efficient simulation environments for robotic ultrasound surgical tasks
Limited use of DRL and IL in complex surgical guidance
Need for a scalable platform to train autonomous surgical agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scalable simulation platform for robotic ultrasound
Physics-based and generative US data simulation
Submodular DRL for anatomy reconstruction
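The "submodular DRL for anatomy reconstruction" bullet above refers to history-dependent rewards with diminishing returns. A minimal coverage-style sketch captures the idea: each step is rewarded only for anatomy voxels seen for the first time, so revisiting already-covered regions earns nothing. This is illustrative only; `coverage_reward` is a hypothetical helper, not the paper's formulation.

```python
def coverage_reward(view_history):
    """Coverage-style, history-dependent reward: the reward at each step is
    the marginal gain in covered voxels (a submodular set function).
    Illustrative sketch, not SonoGym's actual reward implementation."""
    covered = set()
    rewards = []
    for view in view_history:        # each view = set of voxel ids it images
        gain = len(view - covered)   # only newly observed voxels are rewarded
        rewards.append(gain)
        covered |= view              # history accumulates across steps
    return rewards

# Diminishing returns: the third view repeats earlier voxels and earns 0.
views = [{1, 2, 3}, {3, 4}, {1, 2}]
print(coverage_reward(views))  # [3, 1, 0]
```

Because such a reward depends on the whole trajectory, not the current state alone, standard Markovian DRL assumptions break, which is why a dedicated submodular-DRL method is needed.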