AI Summary
To address the challenge of safe navigation for robotic digestive endoscopes (RDEs) in narrow, unstructured gastrointestinal tracts, this paper proposes HI-PPO, a human-intervention-enhanced Proximal Policy Optimization framework. HI-PPO integrates real-time clinical expert interventions into the reinforcement learning (RL) training loop, combining an Enhanced Exploration Mechanism (EEM), Reward-Penalty Adjustment (RPA), and Behavior Cloning Similarity (BCS) constraints. This integration improves both policy safety and exploration efficiency. Experimental evaluation in a high-fidelity simulation environment demonstrates that HI-PPO achieves an average trajectory error of 8.02 mm and a safety score of 0.862, matching expert-level performance and substantially outperforming intervention-free RL baselines. The framework establishes a verifiable, clinically grounded paradigm for safe RDE navigation, advancing the translational readiness of robotic endoscopy.
Abstract
With the increasing application of automated robotic digestive endoscopy (RDE), ensuring safe and efficient navigation in the unstructured and narrow digestive tract has become a critical challenge. Existing automated reinforcement-learning navigation algorithms often cause potentially risky collisions due to the absence of essential human intervention, which significantly limits the safety and effectiveness of RDE in actual clinical practice. To address this limitation, we propose a Human Intervention (HI)-based Proximal Policy Optimization (PPO) framework, dubbed HI-PPO, which incorporates expert knowledge to enhance the safety of RDE. Specifically, HI-PPO combines an Enhanced Exploration Mechanism (EEM), Reward-Penalty Adjustment (RPA), and Behavior Cloning Similarity (BCS) to address PPO's exploration inefficiencies for safe navigation in complex gastrointestinal environments. Comparative experiments were conducted on a simulation platform, and the results showed that HI-PPO achieved a mean Average Trajectory Error (ATE) of 8.02 mm and a Security Score of 0.862, demonstrating performance comparable to human experts. The code will be made publicly available upon publication of this paper.
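The abstract describes augmenting PPO's objective with a behavior-cloning constraint toward expert interventions. The paper's exact formulation is not given here, so the sketch below is only illustrative: a standard PPO clipped surrogate combined with a hypothetical L2 behavior-cloning similarity penalty (the function name, `bc_weight`, and the L2 form are all assumptions, not the paper's definitions).

```python
import numpy as np

def hi_ppo_loss(ratio, advantage, policy_action, expert_action,
                clip_eps=0.2, bc_weight=0.5):
    """Illustrative HI-PPO-style objective (assumed form, not the paper's):
    PPO clipped surrogate plus a behavior-cloning similarity (BCS) penalty
    pulling the policy toward expert-intervened actions."""
    # Standard PPO clipped surrogate (to be maximized, so negated below)
    surrogate = np.minimum(
        ratio * advantage,
        np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage,
    )
    # Assumed BCS term: mean squared deviation from the expert's action
    bc_penalty = np.mean((policy_action - expert_action) ** 2)
    return -np.mean(surrogate) + bc_weight * bc_penalty
```

In this reading, `bc_weight` would trade off exploration against imitation of the intervening clinician; the RPA component described in the abstract would additionally reshape the reward signal itself, which is outside this sketch.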