🤖 AI Summary
This study investigates the performance boundary between imitation learning (IL) and reinforcement learning (RL) in surgical action planning. Using the CholecT50 dataset, we conduct the first systematic comparison of dual-task autoregressive imitation learning (DARIL) against three RL variants (world-model RL, direct video RL, and inverse RL) for predicting future instrument-verb-target triplets. Results show DARIL achieves 34.6% mean average precision (mAP) in triplet recognition, degrading only to 29.2% at 10-second planning horizons, and significantly outperforms all RL methods. We identify distribution matching against expert-annotated test sets as the key driver of IL's superiority, challenging the prevailing assumption that RL inherently excels at sequential decision-making. Our core contributions are: (i) establishing IL as the stronger paradigm for real-time surgical planning; and (ii) introducing a clinically oriented benchmark for interpretable, low-latency action prediction that emphasizes transparency, efficiency, and clinical deployability.
📝 Abstract
Surgical action planning requires predicting future instrument-verb-target triplets for real-time assistance. While teleoperated robotic surgery provides natural expert demonstrations for imitation learning (IL), reinforcement learning (RL) could potentially discover superior strategies through exploration. We present the first comprehensive comparison of IL versus RL for surgical action planning on CholecT50. Our Dual-task Autoregressive Imitation Learning (DARIL) baseline achieves 34.6% action triplet recognition mAP and 33.6% next-frame prediction mAP, with graceful degradation to 29.2% mAP at 10-second planning horizons. We evaluated three RL variants: world model-based RL, direct video RL, and inverse RL enhancement. Surprisingly, all RL approaches underperformed DARIL: world-model RL dropped to 3.1% mAP at 10 seconds, while direct video RL achieved only 15.9%. Our analysis reveals that distribution matching on expert-annotated test sets systematically favors IL over potentially valid RL policies that differ from the training demonstrations. This challenges assumptions about RL superiority in sequential decision-making and provides crucial insights for surgical AI development.