🤖 AI Summary
This work investigates the reasoning capabilities of large language models (LLMs) in proving approximation-ratio guarantees for robotic path planning algorithms, a task demanding sophisticated geometric intuition and multi-step formal deduction. To this end, we introduce the first benchmark dataset tailored to research-level approximation-ratio proofs, comprising 34 challenging proof tasks, and conduct a systematic evaluation of leading open- and closed-source LLMs. By incorporating a context-augmentation strategy that injects task-specific lemmas into the prompt, we substantially improve proof correctness, outperforming both generic chain-of-thought prompting and post-hoc approaches that reveal the ground-truth ratio. Our analysis exposes inherent limitations of current LLMs in formal reasoning under complex constraints, provides a fine-grained taxonomy of common logical errors, and proposes targeted mitigation strategies.
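To make the context-augmentation strategy concrete, here is a minimal, hypothetical Python sketch of a lemma-injecting prompt builder. The paper does not publish code; the names (`ProofTask`, `build_prompt`) and the prompt wording are assumptions for illustration only.

```python
# Hypothetical sketch of the context-augmentation idea: curated,
# task-specific lemmas are injected into the prompt ahead of the proof task.
from dataclasses import dataclass, field

@dataclass
class ProofTask:
    algorithm_description: str   # how the planner constructs its path
    problem_constraints: str     # e.g., kinematic or coverage constraints
    claim: str                   # the approximation-ratio statement to prove
    lemmas: list[str] = field(default_factory=list)  # curated domain lemmas

def build_prompt(task: ProofTask) -> str:
    """Assemble a proof prompt with task-specific lemmas injected up front."""
    lemma_block = "\n".join(
        f"Lemma {i + 1}: {lemma}" for i, lemma in enumerate(task.lemmas)
    )
    return (
        "You are proving an approximation ratio for a path planning algorithm.\n"
        f"Algorithm: {task.algorithm_description}\n"
        f"Constraints: {task.problem_constraints}\n"
        f"You may use the following lemmas without proof:\n{lemma_block}\n"
        f"Prove: {task.claim}\n"
        "Give a rigorous, step-by-step proof."
    )
```

The design point is that the lemmas supply domain knowledge the model would otherwise have to invent, which is where, per the error analysis, hallucinations tend to arise.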
📄 Abstract
Robotic path planning problems are often NP-hard, and practical solutions typically rely on approximation algorithms with provable performance guarantees in the general case. While designing such algorithms is challenging, formally proving their approximation optimality is even more demanding, as it requires domain-specific geometric insight and multi-step mathematical reasoning over complex operational constraints. Recent Large Language Models (LLMs) have demonstrated strong performance on mathematical reasoning benchmarks, yet their ability to assist with research-level optimality proofs in robotic path planning remains under-explored. In this work, we introduce the first benchmark for evaluating LLMs on approximation-ratio proofs of robotic path planning algorithms. The benchmark consists of 34 research-grade proof tasks spanning diverse planning problem types and complexity levels, each requiring structured reasoning over algorithm descriptions, problem constraints, and theoretical guarantees. Our evaluation of state-of-the-art proprietary and open-source LLMs reveals that even the strongest models struggle to produce fully valid proofs without external domain knowledge. However, providing LLMs with task-specific in-context lemmas substantially improves reasoning quality, and does so more effectively than generic chain-of-thought prompting or supplying the ground-truth approximation ratio as posterior knowledge. We further provide a fine-grained error analysis characterizing common logical failures and hallucinations, and demonstrate how each error type can be mitigated through targeted context augmentation.
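As a companion illustration of the evaluation setup, the sketch below shows one way the three prompting conditions compared in the abstract (generic chain-of-thought, ground-truth ratio as posterior knowledge, and task-specific lemma augmentation) could be constructed. Condition names and prompt text are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of the three prompting conditions; only the
# lemma-augmented condition injects domain knowledge into the context.

def make_condition_prompt(
    base_prompt: str,
    condition: str,
    lemmas: list[str] | None = None,
    true_ratio: str | None = None,
) -> str:
    """Return the prompt variant for one evaluation condition."""
    if condition == "cot":
        # Generic chain-of-thought: no domain knowledge added.
        return base_prompt + "\nThink step by step."
    if condition == "posterior_ratio":
        # Ground-truth ratio supplied as posterior knowledge.
        return (
            base_prompt
            + f"\nThe approximation ratio is known to be {true_ratio}; justify it."
        )
    if condition == "lemma_augmented":
        # Task-specific lemmas injected as in-context domain knowledge.
        lemma_block = "\n".join(f"- {lemma}" for lemma in (lemmas or []))
        return (
            base_prompt
            + f"\nYou may use these lemmas without proof:\n{lemma_block}"
        )
    raise ValueError(f"unknown condition: {condition}")
```

Under this framing, the paper's finding is that the `lemma_augmented` condition yields the largest gains in proof validity, even compared with handing the model the final answer in `posterior_ratio`.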