🤖 AI Summary
This paper addresses novel safety threats introduced by agentic AI in vehicles, termed Agentic Vehicles (AgVs), systematically uncovering cognitive-layer vulnerabilities and cross-layer interactions spanning the perception, control, and communication layers that can induce policy misjudgment and loss of vehicle control.
Method: We propose the first structured safety risk analysis framework tailored to automotive-grade AgV platforms, built around a role-based hierarchical architecture comprising a Personal Agent and a Driving Strategy Agent. The framework integrates role-oriented modeling, attack-chain analysis, cross-layer threat mapping, and severity matrix evaluation.
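As a rough illustration of how such a framework might be organized, the sketch below models the two agent roles and a likelihood-by-impact severity matrix in Python. The class names, layer labels, three-point scales, and product-based scoring rule are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (not from the paper): a role-based AgV agent hierarchy
# and a likelihood-x-impact severity matrix. Names, layers, and scales are assumed.
from dataclasses import dataclass
from enum import IntEnum


class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass(frozen=True)
class Threat:
    name: str            # e.g., "spoofed V2X message injected into agent context"
    origin_layer: str    # "perception", "communication", "control", or "agentic"
    target_agent: str    # "PersonalAgent" or "DrivingStrategyAgent"
    likelihood: Level
    impact: Level

    @property
    def severity(self) -> int:
        # Simple ordinal product; the paper's actual scoring may differ.
        return int(self.likelihood) * int(self.impact)


class PersonalAgent:
    """Handles memory-based personalization and goal interpretation for the occupant."""

    def interpret_goal(self, utterance: str) -> dict:
        # Placeholder: a real agent would parse intent with an LLM and stored preferences.
        return {"goal": utterance, "constraints": ["comfort", "privacy"]}


class DrivingStrategyAgent:
    """Turns interpreted goals into driving-policy decisions, constrained by safety rules."""

    def plan(self, goal: dict) -> str:
        # Placeholder: a real agent would consult perception/control layers before committing.
        return f"maneuver plan for goal: {goal['goal']}"


def rank_threats(threats: list[Threat]) -> list[Threat]:
    """Order threats by severity so high-impact cross-layer vectors surface first."""
    return sorted(threats, key=lambda t: t.severity, reverse=True)


if __name__ == "__main__":
    pa, da = PersonalAgent(), DrivingStrategyAgent()
    print(da.plan(pa.interpret_goal("take the scenic route home")))

    threats = [
        Threat("spoofed lane markings", "perception", "DrivingStrategyAgent",
               Level.MEDIUM, Level.HIGH),
        Threat("poisoned preference memory", "agentic", "PersonalAgent",
               Level.LOW, Level.MEDIUM),
    ]
    for t in rank_threats(threats):
        print(f"{t.name}: severity {t.severity} (from {t.origin_layer} layer)")
```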
Contribution/Results: We identify multiple previously undocumented cross-layer attack vectors not covered by the OWASP Agentic AI Security Risks, characterize how minor perturbations can propagate into safety-critical failures, and establish a scalable, reproducible risk assessment foundation for both human-driven vehicles and SAE Level 2+ to Level 4 autonomous driving systems.
📝 Abstract
Agentic AI is increasingly being explored and introduced in both manually driven and autonomous vehicles, leading to the notion of Agentic Vehicles (AgVs), with capabilities such as memory-based personalization, goal interpretation, strategic reasoning, and tool-mediated assistance. While frameworks such as the OWASP Agentic AI Security Risks highlight vulnerabilities in reasoning-driven AI systems, they are not designed for safety-critical cyber-physical platforms such as vehicles, nor do they account for interactions with other layers such as the perception, communication, and control layers. This paper investigates security threats in AgVs, including OWASP-style risks and cyber-attacks from other layers that affect the agentic layer. By introducing a role-based architecture for agentic vehicles, consisting of a Personal Agent and a Driving Strategy Agent, we investigate both vulnerabilities within the agentic AI layer and cross-layer risks, including risks originating from upstream layers (e.g., the perception and control layers). A severity matrix and attack-chain analysis illustrate how small distortions can escalate into misaligned or unsafe behavior in both human-driven and autonomous vehicles. The resulting framework provides the first structured foundation for analyzing security risks of agentic AI in both current and emerging vehicle platforms.
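The attack-chain escalation described in the abstract can be pictured with a toy propagation model: each layer either damps or amplifies a distortion, and the chain is flagged once the accumulated deviation crosses a safety threshold. The layer names, gain values, and threshold below are hypothetical and only illustrate the escalation idea; they are not the paper's actual analysis.

```python
# Toy attack-chain escalation check (illustrative only; gains and threshold are assumed).
from dataclasses import dataclass


@dataclass(frozen=True)
class ChainLink:
    layer: str    # e.g., "perception", "agentic reasoning", "control"
    gain: float   # how much this layer amplifies (>1) or damps (<1) a distortion


def propagate(initial_distortion: float, chain: list[ChainLink],
              unsafe_threshold: float = 1.0) -> tuple[float, bool]:
    """Multiply a small input distortion through each layer's gain and flag it
    as safety-critical once the accumulated deviation crosses the threshold."""
    deviation = initial_distortion
    for link in chain:
        deviation *= link.gain
        print(f"after {link.layer}: deviation = {deviation:.3f}")
    return deviation, deviation >= unsafe_threshold


if __name__ == "__main__":
    # A small (5%) sensor perturbation amplified by misinterpretation in the
    # agentic layer and then committed by the control layer.
    chain = [
        ChainLink("perception", 2.0),
        ChainLink("agentic reasoning", 5.0),
        ChainLink("control", 2.5),
    ]
    final, unsafe = propagate(0.05, chain)
    print(f"final deviation {final:.3f} -> {'UNSAFE' if unsafe else 'tolerable'}")
```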