RE for AI in Practice: Managing Data Annotation Requirements for AI Autonomous Driving Systems

📅 2025-11-19
🤖 AI Summary
This study addresses critical challenges in managing data labeling requirements for AI-enabled perception systems (AIePS) in autonomous driving, including ambiguity, edge-case complexity, dynamic evolution, inconsistency, and resource constraints. Through semi-structured interviews and thematic analysis with 19 practitioners from six international industry organizations and four research institutions, we empirically investigate how labeling requirement definition, quality assurance, and evolution impact system safety, reliability, and regulatory compliance. We present the first empirical characterization of the propagation pathway linking labeling requirements to AI system performance. Based on our findings, we propose a three-dimensional best-practice framework integrating ethical and regulatory compliance, labeling guideline optimization, and embedded quality assurance. This work advances the interdisciplinary frontier of Software Engineering for AI (SE4AI) and Requirements Engineering for AI (RE4AI), offering actionable, evidence-based strategies to improve labeling quality, enhance system robustness, and support regulatory adherence.

📝 Abstract
High-quality data annotation requirements are crucial for the development of safe and reliable AI-enabled perception systems (AIePS) in autonomous driving. Although these requirements play a vital role in reducing bias and enhancing performance, their formulation and management remain underexplored, leading to inconsistencies, safety risks, and regulatory concerns. Our study investigates how annotation requirements are defined and used in practice, the challenges in ensuring their quality, practitioner-recommended improvements, and their impact on AIePS development and performance. We conducted 19 semi-structured interviews with participants from six international companies and four research organisations. Our thematic analysis reveals five key challenges: ambiguity, edge-case complexity, evolving requirements, inconsistencies, and resource constraints; and three main categories of best practices: ensuring compliance with ethical standards, improving data annotation requirement guidelines, and embedding quality assurance for data annotation requirements. We also uncover critical interrelationships between annotation requirements, annotation practices, annotated data quality, and AIePS performance and development, showing how requirement flaws propagate through the AIePS development pipeline. To the best of our knowledge, this study is the first to offer empirically grounded guidance on improving annotation requirements, providing actionable insights to enhance annotation quality, regulatory compliance, and system reliability. It also contributes to the emerging fields of Software Engineering for AI (SE4AI) and Requirements Engineering for AI (RE4AI) by bridging the gap between RE and AI in a timely and much-needed manner.
Problem

Research questions and friction points this paper is trying to address.

Managing data annotation requirements for autonomous driving AI systems
Addressing challenges in annotation quality and regulatory compliance
Investigating how requirement flaws impact AI system performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conducts interviews to identify annotation requirement challenges
Proposes guidelines for improving data annotation requirements
Bridges requirements engineering and AI for system reliability