🤖 AI Summary
IoEVs face critical trust and performance bottlenecks, including frequent cyberattacks, unreliable battery state estimation, and opaque decision-making. To address these challenges, this paper proposes a multi-agent AI framework for trustworthy autonomous IoEVs comprising three synergistic agents: security defense, battery state analysis, and explainable decision-making. Methodologically, it integrates large language model (LLM)-driven reasoning, adversarial-aware learning, and uncertainty quantification to design an interpretable threat-mitigation mechanism and a robust State of Charge (SoC) prediction model. Furthermore, dynamic tool invocation and formal optimization enable user-centric, autonomous service coordination. Experimental evaluation across diverse IoEV scenarios demonstrates significant improvements: a 23.6% gain in attack detection rate and a 41.2% reduction in SoC prediction mean absolute error (MAE). All datasets, models, and source code are publicly released to foster reproducibility and community advancement.
📝 Abstract
The Internet of Electric Vehicles (IoEV) envisions a tightly coupled ecosystem of electric vehicles (EVs), charging infrastructure, and grid services, yet it remains vulnerable to cyberattacks, unreliable battery-state predictions, and opaque decision processes that erode trust and performance. To address these challenges, we introduce a novel Agentic Artificial Intelligence (AAI) framework tailored for IoEV, where specialized agents collaborate to deliver autonomous threat mitigation, robust analytics, and interpretable decision support. Specifically, we (i) design an AAI architecture comprising dedicated agents for cyber-threat detection and response at charging stations, real-time State of Charge (SoC) estimation, and State of Health (SoH) anomaly detection, all coordinated through a shared, explainable reasoning layer; (ii) develop interpretable threat-mitigation mechanisms that proactively identify and neutralize attacks on both physical charging points and learning components; (iii) propose resilient SoC and SoH models that leverage continuous and adversarial-aware learning to produce accurate, uncertainty-aware forecasts with human-readable explanations; and (iv) implement a three-agent pipeline in which each agent uses LLM-driven reasoning and dynamic tool invocation to interpret intent, contextualize tasks, and execute formal optimizations for user-centric assistance. Finally, we validate our framework through comprehensive experiments across diverse IoEV scenarios, demonstrating significant improvements in security and prediction accuracy. All datasets, models, and code will be released publicly.
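To make the "uncertainty-aware forecasts" idea from the abstract concrete, here is a minimal, hypothetical sketch of Monte Carlo dropout applied to a toy SoC predictor. The linear model, the feature names, and all parameter values are illustrative assumptions, not the paper's actual architecture; the point is only that repeated stochastic forward passes yield both a mean SoC estimate and a confidence band.

```python
import numpy as np

rng = np.random.default_rng(0)

def soc_model(features, weights):
    # Toy SoC predictor: weighted sum squashed into [0, 1] via a sigmoid.
    return 1.0 / (1.0 + np.exp(-features @ weights))

def predict_with_uncertainty(features, weights, n_samples=200, drop_p=0.2):
    """Monte Carlo dropout (illustrative): randomly zero out weights on
    each forward pass, then report the mean prediction and its std-dev
    as an uncertainty band."""
    preds = []
    for _ in range(n_samples):
        mask = rng.random(weights.shape) > drop_p
        # Rescale surviving weights so the expected activation is unchanged.
        preds.append(soc_model(features, weights * mask / (1.0 - drop_p)))
    preds = np.array(preds)
    return preds.mean(), preds.std()

# Hypothetical normalized inputs, e.g. voltage, current, temperature.
features = np.array([0.8, -0.3, 0.5])
weights = np.array([1.2, 0.7, -0.4])   # stand-in for learned parameters

mean_soc, soc_std = predict_with_uncertainty(features, weights)
print(f"SoC estimate: {mean_soc:.2f} +/- {soc_std:.2f}")
```

A wide band (large `soc_std`) would flag the forecast as low-confidence, which is the signal the framework's explainable reasoning layer could surface to a user or a downstream agent.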