SoK: Cybersecurity Assessment of Humanoid Ecosystem

📅 2025-08-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Humanoid robots are rapidly being deployed in healthcare and industrial settings, yet their complex architecture, integrating hardware, ROS middleware, OTA update channels, and human-robot interaction, introduces novel cross-layer security risks. Existing research predominantly addresses isolated threats and lacks systematic modeling of cascading attacks. Method: We propose the first seven-layer security architecture tailored to humanoid robots, formalized as a fine-grained matrix covering 39 attack types and 35 defensive mechanisms. We introduce a quantitative evaluation framework combining risk-weighted scoring with Monte Carlo validation to enable cross-component cascading risk analysis, platform benchmarking, and security investment prioritization. Grounded in a Systematization of Knowledge (SoK) methodology, we integrate principles from robotics, cyber-physical systems (CPS), and cybersecurity. Results: Empirical evaluation across the Pepper, G1 EDU, and Digit platforms yields security maturity scores of 39.9%–79.5%, demonstrating both methodological validity and practical deployability.

📝 Abstract
Humanoids are progressing toward practical deployment across healthcare, industrial, defense, and service sectors. While typically considered cyber-physical systems (CPSs), their dependence on traditional networked software stacks (e.g., Linux operating systems), Robot Operating System (ROS) middleware, and over-the-air update channels creates a distinct security profile that exposes them to vulnerabilities conventional CPS models do not fully address. Prior studies have mainly examined specific threats, such as LiDAR spoofing or adversarial machine learning (AML). This narrow focus overlooks how an attack on one component can cascade harm through the robot's interconnected systems. We address this gap through a systematization of knowledge (SoK) that takes a comprehensive approach, consolidating fragmented research from the robotics, CPS, and network security domains. We introduce a seven-layer security model for humanoid robots, organizing 39 known attacks and 35 defenses across the humanoid ecosystem, from hardware to human-robot interaction. Building on this security model, we develop a quantitative 39x35 attack-defense matrix with risk-weighted scoring, validated through Monte Carlo analysis. We demonstrate our method by evaluating three real-world robots: Pepper, G1 EDU, and Digit. The scoring analysis revealed varying security maturity levels, with scores ranging from 39.9% to 79.5% across the platforms. This work introduces a structured, evidence-based assessment method that enables systematic security evaluation, supports cross-platform benchmarking, and guides prioritization of security investments in humanoid robotics.
Problem

Research questions and friction points this paper is trying to address.

Assessing cybersecurity risks in humanoid robots' ecosystem
Addressing cascading vulnerabilities across interconnected robotic systems
Developing a structured security model for humanoid robot evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed a seven-layer security model for humanoids
Created a quantitative 39x35 attack-defense matrix
Introduced risk-weighted scoring with Monte Carlo validation
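The scoring approach described above can be sketched in miniature. This is a hypothetical illustration only: the toy 3x3 coverage matrix, the risk weights, the scoring formula, and the perturbation scheme are all assumptions for demonstration, not the paper's actual 39x35 matrix or methodology.

```python
import random

# Toy coverage matrix: rows = attack types, columns = defenses.
# A 1 means that defense mitigates that attack on the platform under test.
# (Assumed example data, not from the paper.)
coverage = [
    [1, 0, 1],
    [0, 0, 0],  # an attack with no deployed defense
    [1, 1, 0],
]
# Assumed risk weight per attack (higher = more severe).
risk_weights = [0.9, 0.5, 0.7]

def maturity_score(coverage, weights):
    """Risk-weighted share of attacks covered by at least one defense, in %."""
    covered = [w for row, w in zip(coverage, weights) if any(row)]
    return 100.0 * sum(covered) / sum(weights)

def monte_carlo_interval(coverage, weights, trials=10_000, jitter=0.1, seed=0):
    """Check score stability by perturbing the risk weights; return a
    central 90% interval over the resampled scores."""
    rng = random.Random(seed)
    scores = []
    for _ in range(trials):
        perturbed = [max(1e-6, w + rng.uniform(-jitter, jitter)) for w in weights]
        scores.append(maturity_score(coverage, perturbed))
    scores.sort()
    return scores[int(0.05 * trials)], scores[int(0.95 * trials)]

score = maturity_score(coverage, risk_weights)
lo, hi = monte_carlo_interval(coverage, risk_weights)
```

A narrow interval `[lo, hi]` around `score` suggests the maturity ranking is robust to uncertainty in the assumed risk weights, which is the role Monte Carlo validation plays in the paper's framework.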