🤖 AI Summary
Current evaluation methods struggle to reliably assess the safety and correctness of vision-language models in surgical planning, particularly given these models' deficiencies in long-horizon reasoning and in integrating procedural knowledge. This work proposes an expert rule–driven "phase-goal satisfiability" criterion as a high-precision meta-evaluation standard and constructs a multicentric surgical planning benchmark to systematically evaluate video large language models under diverse constraints. The study demonstrates that injecting structured knowledge significantly enhances model performance, whereas purely semantic guidance proves unreliable, and it further reveals pervasive perception errors and under-constrained reasoning in existing models. Critically, the work uncovers systematic biases in conventional sequence similarity metrics when distinguishing valid from invalid surgical plans, thereby establishing a new evaluation paradigm for safety-critical applications.
📝 Abstract
Surgical planning integrates visual perception, long-horizon reasoning, and procedural knowledge, yet it remains unclear whether current evaluation protocols reliably assess vision-language models (VLMs) in safety-critical settings. Motivated by a goal-oriented view of surgical planning, we define planning correctness via phase-goal satisfiability, where plan validity is determined by expert-defined surgical rules. Based on this definition, we introduce a multicentric meta-evaluation benchmark with valid procedural variations and invalid plans containing order and content errors. Using this benchmark, we show that sequence similarity metrics systematically misjudge planning quality, penalizing valid plans while failing to identify invalid ones. We therefore adopt a rule-based goal-satisfiability metric as a high-precision meta-evaluation reference to assess Video-LLMs under progressively constrained settings, revealing failures due to perception errors and under-constrained reasoning. Structural knowledge consistently improves performance, whereas semantic guidance alone is unreliable and benefits larger models only when combined with structural constraints.
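The failure mode of sequence similarity metrics described above can be illustrated with a minimal sketch. The phase names, the precedence rules, and the specific plans below are hypothetical stand-ins, not taken from the paper's benchmark: a valid procedural variation (two order-independent phases swapped) and an invalid plan (a precedence rule violated, e.g. cutting before clipping) receive the *same* Levenshtein-based similarity to the canonical plan, while a rule-based satisfiability check separates them.

```python
def levenshtein(a, b):
    """Edit distance between two phase sequences (one-row DP)."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j - 1] + 1,  # insertion
                                     dp[j] + 1,      # deletion
                                     prev + (x != y))  # substitution
    return dp[-1]

def similarity(a, b):
    """Normalized sequence similarity in [0, 1]."""
    return 1 - levenshtein(a, b) / max(len(a), len(b))

def satisfiable(plan, precedence):
    """Rule-based check: every required phase present, all precedence pairs respected."""
    idx = {p: i for i, p in enumerate(plan)}
    return all(p in idx and q in idx and idx[p] < idx[q] for p, q in precedence)

# Hypothetical canonical plan and expert rules (illustrative only).
canonical = ["access", "dissection", "clipping", "cutting", "extraction", "irrigation", "closure"]
precedence = [("dissection", "clipping"), ("clipping", "cutting")]

# Valid variation: extraction and irrigation are assumed order-independent.
valid = ["access", "dissection", "clipping", "cutting", "irrigation", "extraction", "closure"]
# Invalid plan: cutting before clipping violates a safety-critical rule.
invalid = ["access", "dissection", "cutting", "clipping", "extraction", "irrigation", "closure"]

print(similarity(canonical, valid))    # ≈ 0.714
print(similarity(canonical, invalid))  # ≈ 0.714 — metric cannot tell them apart
print(satisfiable(valid, precedence))    # True
print(satisfiable(invalid, precedence))  # False
```

Both plans sit at edit distance 2 from the canonical sequence, so the similarity metric penalizes the valid variation exactly as much as the unsafe plan; only the goal-satisfiability check reflects planning correctness.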