🤖 AI Summary
This work addresses safety risks arising from natural language instructions and generated plans in LLM-driven robotic task planning. We propose a multi-stage safety verification framework that integrates formal logic (invariants, preconditions, and postconditions) with chain-of-thought (CoT) reasoning to enable dual-check validation. Our method performs robustness checking of instructions, executability verification of plans, and safety-aware filtering of task allocations. The framework comprises prompt soundness analysis, dynamic precondition/postcondition validation, and hierarchical safety gating. Experimental results demonstrate a 90.5% reduction in the acceptance rate of harmful instructions while preserving high approval rates for legitimate tasks, significantly improving system robustness, interpretability, and trustworthiness. By establishing a verifiable safety assurance paradigm, our approach advances the deployment of LLM-powered autonomous robots in safety-critical environments.
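The summary above compresses the mechanism, so here is a minimal Python sketch of the dual-check idea: each plan step must pass both a formal precondition/postcondition check and a CoT-style safety verdict. All names here (`ActionSpec`, `dual_check`, `cot_reasoner_approves`) are hypothetical, and the CoT reasoner is stubbed with a denylist; this is an illustration of the concept, not the paper's actual interfaces.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical world-state predicate: maps a state dict to True/False.
Predicate = Callable[[Dict[str, bool]], bool]

@dataclass
class ActionSpec:
    """Formal contract for one plan step (illustrative, not the paper's schema)."""
    name: str
    preconditions: List[Predicate]
    postconditions: List[Predicate]
    effect: Callable[[Dict[str, bool]], Dict[str, bool]]

def cot_reasoner_approves(action_name: str) -> bool:
    """Stand-in for an LLM chain-of-thought safety verdict. A real system
    would prompt an LLM to reason step by step and return accept/reject;
    here we use a fixed denylist for the sake of a runnable example."""
    return "harm" not in action_name.lower()

def dual_check(plan: List[ActionSpec], state: Dict[str, bool]) -> bool:
    """Accept the plan only if every step passes BOTH the formal
    precondition/postcondition check and the CoT reasoner's verdict."""
    for step in plan:
        if not all(p(state) for p in step.preconditions):
            return False                 # formal check: not executable here
        if not cot_reasoner_approves(step.name):
            return False                 # CoT check: judged unsafe
        state = step.effect(state)       # simulate the step's effect
        if not all(p(state) for p in step.postconditions):
            return False                 # formal check: bad resulting state
    return True

# Tiny usage example: pick up a cup only if the gripper is free.
pick_cup = ActionSpec(
    name="pick_cup",
    preconditions=[lambda s: s["gripper_free"]],
    postconditions=[lambda s: s["holding_cup"]],
    effect=lambda s: {**s, "gripper_free": False, "holding_cup": True},
)
print(dual_check([pick_cup], {"gripper_free": True, "holding_cup": False}))  # True
```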
📝 Abstract
Robotics researchers increasingly leverage large language models (LLMs) in robotics systems, using them as interfaces to receive task commands, generate task plans, form team coalitions, and allocate tasks among multi-robot and human agents. Despite these benefits, the growing adoption of LLMs in robotics has raised several safety concerns, particularly around the execution of malicious or unsafe natural language prompts. In addition, ensuring that task plans, team formations, and task allocation outputs from LLMs are adequately examined, refined, or rejected is crucial for maintaining system integrity. In this paper, we introduce SafePlan, a multi-component framework that combines formal logic with chain-of-thought reasoners to enhance the safety of LLM-based robotics systems. Using SafePlan's components, including the Prompt Sanity CoT Reasoner and the Invariant, Precondition, and Postcondition CoT Reasoners, we examine the safety of natural language task prompts, task plans, and task allocation outputs generated by LLM-based robotic systems as a means of investigating and enhancing the system's safety profile. Our results show that SafePlan outperforms baseline models, yielding a 90.5% reduction in the acceptance of harmful task prompts while maintaining reasonable acceptance rates for safe tasks.
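Since the abstract only names the pipeline stages, the sketch below shows one plausible way the hierarchical gating could be wired: checks run in order, and the first failure rejects the request before later stages run. Everything here (the function names, gate signatures, and keyword denylist) is an assumption for illustration; SafePlan's real reasoners are LLM CoT prompts, not keyword rules.

```python
from typing import Callable, List, Tuple

# Each gate takes the raw prompt and returns (passed, reason).
Gate = Callable[[str], Tuple[bool, str]]

def prompt_sanity_gate(prompt: str) -> Tuple[bool, str]:
    """First gate: reject prompts requesting obviously harmful behavior."""
    banned = ("attack", "hurt", "destroy")
    if any(word in prompt.lower() for word in banned):
        return False, "prompt rejected as unsafe"
    return True, "prompt accepted"

def plan_gate(prompt: str) -> Tuple[bool, str]:
    """Second gate: placeholder for invariant/pre/postcondition checks
    on the generated plan (the plan itself is omitted in this sketch)."""
    return True, "plan verified"

def allocation_gate(prompt: str) -> Tuple[bool, str]:
    """Third gate: placeholder for safety-aware task allocation filtering."""
    return True, "allocation approved"

def safeplan_pipeline(prompt: str, gates: List[Gate]) -> Tuple[bool, str]:
    """Run gates in order; the first failure rejects the request outright,
    so later (more expensive) checks never see unsafe inputs."""
    for gate in gates:
        ok, reason = gate(prompt)
        if not ok:
            return False, reason
    return True, "all gates passed"

gates = [prompt_sanity_gate, plan_gate, allocation_gate]
print(safeplan_pipeline("fetch a cup from the kitchen", gates))  # accepted
print(safeplan_pipeline("attack the intruder", gates))           # rejected early
```

Ordering the gates this way mirrors the abstract's framing: cheap instruction-level screening happens before plan verification and allocation filtering, so harmful prompts are rejected as early as possible.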