SafePlan: Leveraging Formal Logic and Chain-of-Thought Reasoning for Enhanced Safety in LLM-based Robotic Task Planning

📅 2025-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses safety risks arising from natural language instructions and generated plans in LLM-driven robotic task planning. We propose a multi-stage safety verification framework that uniquely integrates formal logic—encompassing invariants, preconditions, and postconditions—with chain-of-thought (CoT) reasoning to enable dual-check validation. Our method performs robustness checking of instructions, executability verification of plans, and safety-aware task allocation filtering. The framework comprises prompt soundness analysis, dynamic precondition/postcondition validation, and hierarchical safety gating. Experimental results demonstrate a 90.5% reduction in the acceptance rate of harmful instructions while preserving high approval rates for legitimate tasks. This significantly enhances system robustness, interpretability, and trustworthiness. By establishing a verifiable safety assurance paradigm, our approach advances the deployment of LLM-powered autonomous robots in safety-critical environments.
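The dual-check idea described above (formal precondition/postcondition checks on each plan step, plus a global safety invariant) can be sketched as follows. This is an illustrative reconstruction, not the paper's actual implementation; all names (`Action`, `verify_plan`, the `over_human_zone` invariant) are hypothetical, and the CoT-based prompt-sanity stage is omitted since it depends on an LLM call.

```python
# Hypothetical sketch of SafePlan-style plan verification (illustrative
# names, not the paper's API). Each plan step must satisfy its formal
# preconditions before execution and its postconditions after, and every
# resulting state must uphold a global safety invariant.

from dataclasses import dataclass
from typing import Callable, Dict, List

State = Dict[str, bool]

@dataclass
class Action:
    name: str
    preconditions: List[Callable[[State], bool]]
    effects: Callable[[State], State]
    postconditions: List[Callable[[State], bool]]

def invariant_holds(state: State) -> bool:
    # Example invariant: never hold the gripper open above a human zone.
    return not (state.get("over_human_zone", False) and state.get("gripper_open", False))

def verify_plan(plan: List[Action], state: State) -> bool:
    """Reject the plan if any step violates a precondition, postcondition, or the invariant."""
    for action in plan:
        if not all(pre(state) for pre in action.preconditions):
            return False  # precondition check failed before execution
        state = action.effects(state)  # simulate the step's effect
        if not all(post(state) for post in action.postconditions):
            return False  # postcondition check failed after execution
        if not invariant_holds(state):
            return False  # safety invariant violated
    return True

# Usage: a two-step pick-and-place plan over a simple boolean state.
pick = Action(
    name="pick",
    preconditions=[lambda s: not s.get("holding", False)],
    effects=lambda s: {**s, "holding": True, "gripper_open": False},
    postconditions=[lambda s: s["holding"]],
)
place = Action(
    name="place",
    preconditions=[lambda s: s.get("holding", False)],
    effects=lambda s: {**s, "holding": False, "gripper_open": True},
    postconditions=[lambda s: not s["holding"]],
)

print(verify_plan([pick, place], {"holding": False, "over_human_zone": False}))  # True
print(verify_plan([place], {"holding": False}))  # False: precondition unmet
```

In the full framework, a plan that passes these formal checks would additionally be gated by the chain-of-thought sanity reasoning over the original instruction, giving the dual-check validation the summary describes.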

📝 Abstract
Robotics researchers increasingly leverage large language models (LLMs) in robotics systems, using them as interfaces to receive task commands, generate task plans, form team coalitions, and allocate tasks among multi-robot and human agents. However, despite their benefits, the growing adoption of LLMs in robotics has raised several safety concerns, particularly regarding the execution of malicious or unsafe natural language prompts. In addition, ensuring that task plans, team formations, and task allocation outputs from LLMs are adequately examined, refined, or rejected is crucial for maintaining system integrity. In this paper, we introduce SafePlan, a multi-component framework that combines formal logic and chain-of-thought reasoners to enhance the safety of LLM-based robotics systems. Using the components of SafePlan, including the Prompt Sanity COT Reasoner and the Invariant, Precondition, and Postcondition COT Reasoners, we examine the safety of natural language task prompts, task plans, and task allocation outputs generated by LLM-based robotic systems as a means of investigating and enhancing the system's safety profile. Our results show that SafePlan outperforms baseline models, yielding a 90.5% reduction in harmful task prompt acceptance while maintaining reasonable acceptance of safe tasks.
Problem

Research questions and friction points this paper is trying to address.

Enhances safety in LLM-based robotic task planning
Addresses safety concerns from malicious or unsafe prompts
Improves system integrity by refining or rejecting unsafe outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines formal logic with chain-of-thought reasoning
Uses Prompt Sanity and Condition COT Reasoners
Reduces harmful task prompt acceptance by 90.5%