🤖 AI Summary
This work addresses security vulnerabilities in embodied AI systems, focusing on the Unitree Go2 quadruped robot platform. Method: We propose and empirically validate the first cross-layer vulnerability framework—“The Ten Sins of Embodied AI Security”—using a full-stack analysis methodology encompassing BLE sniffing, network traffic interception, APK reverse engineering, cloud API penetration testing, and hardware interface probing. Contribution/Results: Our analysis systematically identifies ten high-severity vulnerabilities distributed across three architectural layers—wireless configuration, core modules, and external interfaces—enabling threats including device hijacking, arbitrary command injection, sensitive data leakage, and physical-layer takeover. Moving beyond conventional AI jailbreaking paradigms, this study establishes a holistic, hardware-software co-designed security assessment for embodied systems. It demonstrates the feasibility of multi-layer attack chains and provides system-level defense recommendations tailored to embodied agents, thereby laying both theoretical foundations and practical benchmarks for the secure design of next-generation autonomous robots.
📝 Abstract
Embodied AI systems integrate language models with real-world sensing, mobility, and cloud-connected mobile apps. Yet while model jailbreaks have drawn significant attention, the broader system stack of embodied intelligence remains largely unexplored. In this work, we conduct the first holistic security analysis of the Unitree Go2 platform and uncover ten cross-layer vulnerabilities, the "Ten Sins of Embodied AI Security." Using BLE sniffing, traffic interception, APK reverse engineering, cloud API testing, and hardware probing, we identify systemic weaknesses across three architectural layers: wireless provisioning, core modules, and external interfaces. These include hard-coded keys, predictable handshake tokens, Wi-Fi credential leakage, missing TLS validation, a static SSH password, multilingual safety-bypass behavior, insecure local relay channels, weak binding logic, and unrestricted firmware access. Together, they allow adversaries to hijack devices, inject arbitrary commands, extract sensitive information, or gain full physical control. Our findings show that securing embodied AI requires far more than aligning the model itself. We conclude with system-level lessons learned and recommendations for building embodied platforms that remain robust across their entire software-hardware ecosystem.
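The paper does not publish exploit code, but one of the weakness classes it names, missing TLS validation, is easy to characterize generically. As a minimal illustrative sketch (not the Go2 app's actual code), the flaw corresponds to a client building an `ssl.SSLContext` with certificate and hostname checks disabled, which lets any man-in-the-middle present an arbitrary certificate and intercept traffic:

```python
import ssl

# Properly configured client context: the certificate chain and the
# server hostname are both verified by default.
secure_ctx = ssl.create_default_context()

# The anti-pattern behind "missing TLS validation": both checks are
# switched off, so any interposed certificate is accepted silently.
insecure_ctx = ssl.create_default_context()
insecure_ctx.check_hostname = False        # skip hostname matching
insecure_ctx.verify_mode = ssl.CERT_NONE   # skip chain validation

def is_tls_validating(ctx: ssl.SSLContext) -> bool:
    """Return True if the context still enforces certificate validation."""
    return ctx.verify_mode != ssl.CERT_NONE and bool(ctx.check_hostname)

print(is_tls_validating(secure_ctx))    # True
print(is_tls_validating(insecure_ctx))  # False
```

A check like `is_tls_validating` (a hypothetical helper, not from the paper) is the kind of property an automated audit of a mobile app's networking layer would flag.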