🤖 AI Summary
In ultrasound-guided robotic spinal surgery, safety-critical control failures arise from estimation errors in high-dimensional visual perception (such as semantic segmentation and image registration) under unknown data distributions. To address this, we propose a robust control framework grounded in bounded-mean sub-Gaussian noise modeling. Our method integrates robust set-theoretic control with variance-proxy propagation, enabling the first verifiable closed-loop safety guarantees against complex, perception-induced uncertainties. We further unify deep-learning-based perception, optimization-based motion planning, and sub-Gaussian model predictive control (MPC). The framework is rigorously evaluated in a high-fidelity simulator incorporating realistic human anatomy and respiratory motion. Experiments demonstrate strict preservation of system stability and surgical trajectory accuracy under severe perception noise, significantly enhancing control reliability in safety-critical scenarios.
📝 Abstract
Safety-critical control using high-dimensional sensory feedback from optical data (e.g., images, point clouds) poses significant challenges in domains like autonomous driving and robotic surgery. Control can rely on low-dimensional states estimated from high-dimensional data; however, the estimation errors often follow complex, unknown distributions that standard probabilistic models fail to capture, making formal safety guarantees challenging. In this work, we introduce a novel characterization of these general estimation errors as sub-Gaussian noise with bounded mean. We develop a new technique for uncertainty propagation under the proposed noise characterization in linear systems, which combines robust set-based methods with the propagation of sub-Gaussian variance proxies. We further develop a Model Predictive Control (MPC) framework that provides closed-loop safety guarantees for linear systems under the proposed noise assumption. We apply this MPC approach in an ultrasound-image-guided robotic spinal surgery pipeline, which comprises deep-learning-based semantic segmentation, image-based registration, high-level optimization-based planning, and low-level robotic control. To validate the pipeline, we developed a realistic simulation environment integrating real human anatomy, robot dynamics, efficient ultrasound simulation, and in-vivo data of breathing motion and drilling force. Evaluation results in simulation demonstrate the potential of our approach for solving complex image-guided robotic surgery tasks while ensuring safety.
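To make the abstract's core idea concrete, the following is a minimal, hypothetical sketch of the two propagation mechanisms it names: a robust (set-based) bound on the bounded noise mean, and a recursion on the sub-Gaussian variance-proxy matrix for linear dynamics x_{k+1} = A x_k + w_k. The matrix `A`, proxy `Sigma_w`, mean bound `b`, and all numerical values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Illustrative (assumed) linear dynamics and noise model: w_k is sub-Gaussian
# with variance-proxy matrix Sigma_w and a mean whose norm is bounded by b.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
Sigma_w = 0.01 * np.eye(2)   # per-step variance proxy (assumed)
b = 0.05                     # per-step bound on the noise mean (assumed)

def propagate(Sigma, mean_bound, steps):
    """Propagate uncertainty through x_{k+1} = A x_k + w_k.

    - Variance proxies of independent sub-Gaussian terms add, and a linear
      map A transforms a proxy matrix as A @ Sigma @ A.T.
    - The mean bound follows a worst-case (robust-set) recursion using the
      spectral norm of A.
    """
    A_norm = np.linalg.norm(A, 2)  # largest singular value
    for _ in range(steps):
        Sigma = A @ Sigma @ A.T + Sigma_w
        mean_bound = A_norm * mean_bound + b
    return Sigma, mean_bound

def tail_bound(Sigma, v, t):
    """Sub-Gaussian tail: P(|v^T (x - E x)| > t) <= 2 exp(-t^2 / (2 v^T Sigma v))."""
    return 2.0 * np.exp(-t**2 / (2.0 * v @ Sigma @ v))

# Start from a known state (zero proxy, zero mean error) and roll forward.
Sigma_k, m_k = propagate(np.zeros((2, 2)), 0.0, steps=20)
p = tail_bound(Sigma_k, np.array([1.0, 0.0]), t=0.5)
```

A safety constraint in the MPC could then be tightened by the deterministic mean bound `m_k` plus a high-probability margin derived from inverting `tail_bound`; this is the general pattern behind combining set-based robustness with variance-proxy propagation, not the paper's exact formulation.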