Enhancing Reliability in LLM-Integrated Robotic Systems: A Unified Approach to Security and Safety

📅 2025-09-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the insufficient reliability of LLM-empowered robotic systems under adversarial attacks and in complex environments, this paper proposes the first unified framework integrating safety (i.e., operational integrity) and security (i.e., resilience against malicious inputs). The framework incorporates dynamic prompt assembly, runtime state management, and multi-level safety verification mechanisms, alongside a dual-dimension evaluation metric—balancing performance and safety—that supports end-to-end validation in both simulation and real-world robotic platforms. Compared to baseline methods, our approach improves task success rate by 30.8% under prompt injection attacks and up to 325% in highly dynamic adversarial settings. These gains significantly enhance system robustness and deployability, thereby bridging a critical gap in the reliable integration of LLMs into embodied intelligence systems.

Technology Category

Application Category

📝 Abstract
Integrating large language models (LLMs) into robotic systems has revolutionised embodied artificial intelligence, enabling advanced decision-making and adaptability. However, ensuring reliability, encompassing both security against adversarial attacks and safety in complex environments, remains a critical challenge. To address this, we propose a unified framework that mitigates prompt injection attacks while enforcing operational safety through robust validation mechanisms. Our approach combines prompt assembling, state management, and safety validation, evaluated using both performance and security metrics. Experiments show a 30.8% improvement under injection attacks and up to a 325% improvement in complex environment settings under adversarial conditions compared to baseline scenarios. This work bridges the gap between safety and security in LLM-based robotic systems, offering actionable insights for deploying reliable LLM-integrated mobile robots in real-world settings. The framework is open-sourced with simulation and physical deployment demos at https://llmeyesim.vercel.app/
Problem

Research questions and friction points this paper is trying to address.

Enhancing reliability in LLM-integrated robotic systems
Mitigating security risks from adversarial prompt injection attacks
Ensuring operational safety in complex robotic environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified framework mitigates prompt injection attacks
Combines prompt assembling and state management
Enforces safety through robust validation mechanisms
W
Wenxiao Zhang
The University of Western Australia, 35 Stirling Hwy, Perth, 6009, WA, Australia
X
Xiangrui Kong
The University of Western Australia, 35 Stirling Hwy, Perth, 6009, WA, Australia
C
Conan Dewitt
The University of Western Australia, 35 Stirling Hwy, Perth, 6009, WA, Australia
Thomas Bräunl
Thomas Bräunl
Professor in Electrical and Computer Engineering, The University of Western Australia
RoboticsAutomationElectromobilityAutonomous DrivingSimulation
Jin B. Hong
Jin B. Hong
The University of Western Australia
CybersecurityMoving Target DefensePrivacy