🤖 AI Summary
In the development of AI-enabled perception systems (AIePS) for automated driving, annotation quality critically impacts model safety and reliability; yet empirical understanding of how annotation errors originate and propagate across multi-organisational automotive supply chains remains lacking. This study addresses the gap by adopting two complementary perspectives, the annotation lifecycle and the supply chain, drawing on 19 semi-structured interviews with 20 domain experts from six companies and four research institutes (50 hours of transcripts) and a six-phase thematic analysis. We propose the first annotation-defect taxonomy for AI perception systems, comprising 18 error types organised along three data-quality dimensions: completeness, accuracy, and consistency. Analogous to Failure Mode and Effects Analysis (FMEA), the taxonomy serves as a “failure mode library” for annotation defects. Validated by industry practitioners, it supports root-cause analysis, supplier evaluation, annotator onboarding, and annotation-guideline refinement, thereby enhancing the reliability and safety of AIePS development.
📝 Abstract
Data annotation is essential but highly error-prone in the development of AI-enabled perception systems (AIePS) for automated driving, and its quality directly influences model performance, safety, and reliability. However, the industry lacks empirical insight into how annotation errors emerge and spread across the multi-organisational automotive supply chain. This study addresses this gap through a multi-organisation case study involving six companies and four research institutes across Europe and the UK. Based on 19 semi-structured interviews with 20 experts (50 hours of transcripts) and a six-phase thematic analysis, we develop a taxonomy of 18 recurring annotation error types across three data-quality dimensions: completeness (e.g., attribute omission, missing feedback loops, edge-case omissions, selection bias), accuracy (e.g., mislabelling, bounding-box inaccuracies, granularity mismatches, bias-driven errors), and consistency (e.g., inter-annotator disagreement, ambiguous instructions, misaligned hand-offs, cross-modality inconsistencies). The taxonomy was validated with industry practitioners, who reported its usefulness for root-cause analysis, supplier quality reviews, annotator onboarding, and the improvement of annotation guidelines, and who described it as a failure-mode catalogue akin to FMEA. By conceptualising annotation quality as a lifecycle and supply-chain issue, this study contributes to SE4AI a shared vocabulary, a diagnostic toolset, and actionable guidance for building trustworthy AI-enabled perception systems.