Subtle Risks, Critical Failures: A Framework for Diagnosing Physical Safety of LLMs for Embodied Decision Making

📅 2025-05-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current LLM-based embodied agents are evaluated for safety using coarse-grained success rates, which fail to localize latent physical risks, hindering deployment in high-stakes scenarios. This work proposes SAFEL, a framework for decomposable evaluation of embodied planning safety: it enables contextual root-cause attribution through a Command Refusal Test and a fine-grained Plan Safety Test spanning goal interpretation, transition modeling, and action sequencing. The authors also introduce EMBODYGUARD, a PDDL-grounded benchmark comprising 942 high-risk physical scenarios. SAFEL integrates formal modeling, modular assessment, refusal classification, and joint feasibility–hazard discrimination. Evaluations across 13 state-of-the-art models reveal that while agents consistently refuse overtly dangerous instructions, they exhibit severe deficiencies in anticipating and mitigating subtle, causally indirect physical risks, highlighting a critical gap in current safety capabilities.

📝 Abstract
Large Language Models (LLMs) are increasingly used for decision making in embodied agents, yet existing safety evaluations often rely on coarse success rates and domain-specific setups, making it difficult to diagnose why and where these models fail. This obscures our understanding of embodied safety and limits the selective deployment of LLMs in high-risk physical environments. We introduce SAFEL, a framework for systematically evaluating the physical safety of LLMs in embodied decision making. SAFEL assesses two key competencies: (1) rejecting unsafe commands via the Command Refusal Test, and (2) generating safe and executable plans via the Plan Safety Test. Critically, the latter is decomposed into functional modules (goal interpretation, transition modeling, and action sequencing), enabling fine-grained diagnosis of safety failures. To support this framework, we introduce EMBODYGUARD, a PDDL-grounded benchmark containing 942 LLM-generated scenarios covering both overtly malicious and contextually hazardous instructions. Evaluation across 13 state-of-the-art LLMs reveals that while models often reject clearly unsafe commands, they struggle to anticipate and mitigate subtle, situational risks. Our results highlight critical limitations in current LLMs and provide a foundation for more targeted, modular improvements in safe embodied reasoning.
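The two-test structure described in the abstract can be sketched in code. The following is a minimal illustration, not the paper's implementation: all class names, field names, and the scoring rule are assumptions made up for this sketch. It shows the key idea of attributing a plan-safety failure to an individual functional module rather than reporting a single pass/fail rate.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PlanDiagnosis:
    """Per-module verdicts for the Plan Safety Test (hypothetical names)."""
    goal_interpretation: bool   # was the goal understood without unsafe reinterpretation?
    transition_modeling: bool   # are predicted state changes physically valid?
    action_sequencing: bool     # is the action order executable and hazard-free?

    def failed_modules(self):
        # vars() preserves field order, so failures are reported in pipeline order
        return [name for name, ok in vars(self).items() if not ok]

def evaluate_scenario(refused: bool,
                      overtly_malicious: bool,
                      diagnosis: Optional[PlanDiagnosis]):
    """Combine a Command Refusal Test with a modular Plan Safety Test.

    Overtly malicious commands should be refused outright; contextually
    hazardous but legitimate commands should instead yield a safe plan,
    with any failure localized to a specific module.
    """
    if overtly_malicious:
        return {"safe": refused, "failure": None if refused else "refusal"}
    if refused or diagnosis is None:
        # Refusing a legitimate command is also a failure mode
        return {"safe": False, "failure": "over-refusal"}
    failures = diagnosis.failed_modules()
    return {"safe": not failures, "failure": failures or None}
```

For example, a model that parses a goal correctly but mispredicts a physical state change would be scored `{"safe": False, "failure": ["transition_modeling"]}`, localizing the root cause in the spirit of SAFEL's fine-grained diagnosis.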
Problem

Research questions and friction points this paper is trying to address.

Diagnosing why and where LLMs fail in embodied decision making
Evaluating LLMs' ability to reject unsafe commands and generate safe plans
Assessing LLMs' limitations in anticipating subtle situational risks
Innovation

Methods, ideas, or system contributions that make the work stand out.

SAFEL framework evaluates LLM physical safety
EMBODYGUARD benchmark tests 942 hazardous scenarios
Modular diagnosis for LLM safety failures