Data Annotation Quality Problems in AI-Enabled Perception System Development

📅 2025-11-20
🤖 AI Summary
In the development of AI-enabled perception systems (AIePS) for automated driving, annotation quality critically affects model safety and reliability; however, empirical understanding of how annotation errors originate and propagate across multi-organisational automotive supply chains remains lacking. This study addresses the gap from two perspectives, the annotation lifecycle and the supply chain, drawing on 19 semi-structured interviews with 20 domain experts (50 hours of transcripts) and a six-phase thematic analysis. We propose the first annotation defect taxonomy for AI perception systems, comprising 18 error types organised along three data-quality dimensions: completeness, accuracy, and consistency. Analogous to Failure Mode and Effects Analysis (FMEA), the taxonomy serves as a "failure mode library" for annotation defects. Validated by industry practitioners, it supports root-cause analysis, supplier evaluation, annotator onboarding, and annotation guideline refinement, thereby enhancing the reliability and safety of AIePS development.

📝 Abstract
Data annotation is essential but highly error-prone in the development of AI-enabled perception systems (AIePS) for automated driving, and its quality directly influences model performance, safety, and reliability. However, the industry lacks empirical insights into how annotation errors emerge and spread across the multi-organisational automotive supply chain. This study addresses this gap through a multi-organisation case study involving six companies and four research institutes across Europe and the UK. Based on 19 semi-structured interviews with 20 experts (50 hours of transcripts) and a six-phase thematic analysis, we develop a taxonomy of 18 recurring annotation error types across three data-quality dimensions: completeness (e.g., attribute omission, missing feedback loops, edge-case omissions, selection bias), accuracy (e.g., mislabelling, bounding-box inaccuracies, granularity mismatches, bias-driven errors), and consistency (e.g., inter-annotator disagreement, ambiguous instructions, misaligned hand-offs, cross-modality inconsistencies). The taxonomy was validated with industry practitioners, who reported its usefulness for root-cause analysis, supplier quality reviews, onboarding, and improving annotation guidelines. They described it as a failure-mode catalogue similar to FMEA. By conceptualising annotation quality as a lifecycle and supply-chain issue, this study contributes to SE4AI by offering a shared vocabulary, diagnostic toolset, and actionable guidance for building trustworthy AI-enabled perception systems.
Problem

Research questions and friction points this paper is trying to address.

Addresses data annotation quality issues in AI-enabled perception systems for automated driving
Identifies recurring annotation error types across completeness, accuracy, and consistency dimensions
Provides taxonomy and tools for improving annotation quality across multi-organizational supply chains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-organisation case study of how annotation errors emerge and propagate across the supply chain
Taxonomy of 18 error types across three data-quality dimensions
Practitioner-validated diagnostic tool for lifecycle quality improvement