AI Safety Assurance in Electric Vehicles: A Case Study on AI-Driven SOC Estimation

📅 2025-09-03
🤖 AI Summary
Current AI components deployed in automotive safety-critical systems, such as battery state-of-charge (SOC) estimation, lack standardized, verifiable functional safety assessment methodologies. Method: The paper proposes an integrated safety evaluation framework for in-vehicle AI components that harmonizes ISO 26262 (functional safety) with ISO/PAS 8800 (AI safety for road vehicles). It introduces a fault-injection-based robustness testing methodology that quantifies an AI model's behavioral consistency and failure response under sensor input perturbations simulating anomalous operating conditions. Contribution/Results: Experimental validation on an AI-driven SOC estimation module demonstrates the framework's effectiveness in assessing compliance with functional safety requirements and improves the testability and traceability of AI safety properties, outlining a practical pathway toward standardized safety assessment of AI in automotive safety-critical applications.

📝 Abstract
Integrating Artificial Intelligence (AI) technology in electric vehicles (EVs) introduces unique challenges for safety assurance, particularly within the framework of ISO 26262, which governs functional safety in the automotive domain. Traditional assessment methodologies are not geared toward evaluating AI-based functions, which therefore require evolving standards and practices. This paper explores how an independent assessment of an AI component in an EV can be achieved by combining ISO 26262 with the recently released ISO/PAS 8800, whose scope is AI safety for road vehicles. AI-driven State of Charge (SOC) battery estimation exemplifies the process. Key features relevant to the independent assessment of this extended evaluation approach are identified. As part of the evaluation, robustness testing of the AI component is conducted using fault injection experiments, wherein perturbed sensor inputs are systematically introduced to assess the component's resilience to input variance.
Problem

Research questions and friction points this paper is trying to address.

Assuring AI safety in electric vehicles under ISO 26262
Evaluating AI-based functions as standards and practices evolve
Testing AI robustness through fault injection experiments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combining ISO 26262 with ISO/PAS 8800 standards
Robustness testing using fault injection experiments
Systematic assessment of AI component resilience
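The fault-injection robustness testing described above can be sketched in code. The snippet below is a minimal illustration, not the paper's actual methodology: `soc_model` is a hypothetical stand-in for the AI-driven SOC estimator, and the fault modes (Gaussian sensor noise, a voltage calibration offset, a stuck temperature channel) and tolerance threshold are assumed for illustration. The idea is to inject perturbations into the sensor inputs and report, per fault mode, how often the SOC estimate stays within a tolerance of the nominal output.

```python
import numpy as np

# Hypothetical SOC estimator standing in for the paper's AI model:
# maps [voltage (V), current (A), temperature (deg C)] to SOC in [0, 1].
def soc_model(features: np.ndarray) -> float:
    v, i, t = features
    return float(np.clip((v - 3.0) / 1.2 - 0.01 * i + 0.001 * (t - 25.0), 0.0, 1.0))

def inject_fault(features: np.ndarray, mode: str, rng) -> np.ndarray:
    """Return a perturbed copy of the sensor vector (assumed fault modes)."""
    x = features.copy()
    if mode == "gaussian_noise":   # broadband noise on all channels
        x += rng.normal(0.0, [0.05, 0.5, 1.0])
    elif mode == "offset":         # calibration drift on the voltage sensor
        x[0] += 0.1
    elif mode == "stuck_at":       # temperature channel frozen at 0 deg C
        x[2] = 0.0
    return x

def robustness_report(nominal, modes, tolerance=0.05, trials=100, seed=0):
    """Fraction of injections per mode whose SOC deviation stays within tolerance."""
    rng = np.random.default_rng(seed)
    baseline = soc_model(nominal)
    report = {}
    for mode in modes:
        within = sum(
            abs(soc_model(inject_fault(nominal, mode, rng)) - baseline) <= tolerance
            for _ in range(trials)
        )
        report[mode] = within / trials
    return report

nominal = np.array([3.8, 10.0, 25.0])  # V, A, deg C
print(robustness_report(nominal, ["gaussian_noise", "offset", "stuck_at"]))
```

A per-mode pass rate like this gives the quantitative, repeatable evidence of behavioral consistency that a standards-based assessment needs; in practice the perturbation profiles would be derived from hazard analysis rather than chosen ad hoc.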