Explainability as a Compliance Requirement: What Regulated Industries Need from AI Tools for Design Artifact Generation

📅 2025-07-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
In safety-critical, regulated domains such as aviation and healthcare, AI-driven design artifact generation tools suffer from poor explainability, leading to non-auditable decisions, high manual verification overhead, fragmented cross-role collaboration, and regulatory non-compliance risks. To characterize this gap, we conducted semi-structured interviews with ten practitioners from safety-critical industries and qualitatively analyzed the explainability bottlenecks that hinder the integration of NLP tools into requirements engineering. Building on these insights, we identify four interlocking improvements for compliance-oriented explainability: (1) tracing generated decisions back to their sources, (2) explicit justification of generated artifacts, (3) support for domain-specific adaptation, and (4) compliance validation. Practitioners report that non-explainable outputs force extensive manual validation and erode stakeholder trust, often negating the anticipated efficiency gains; the identified improvements form a practical roadmap for transparent, trustworthy, and certifiable AI deployment in high-assurance requirements engineering.
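
The first of these improvements, source tracing, is described in the paper only at the concept level. The sketch below is a minimal, hypothetical illustration of what a provenance record linking a generated model element back to its source requirement text might look like; the names (`SourceSpan`, `TraceRecord`) and fields are our own assumptions, not an API from the paper.

```python
# Hypothetical sketch: a provenance record linking each generated model
# element back to the requirement text that produced it.
from dataclasses import dataclass
from typing import List

@dataclass
class SourceSpan:
    requirement_id: str   # e.g. "REQ-042"
    start: int            # character offset into the requirement text
    end: int

@dataclass
class TraceRecord:
    element_id: str             # identifier of the generated diagram/model element
    sources: List[SourceSpan]   # where in the requirements the element came from
    justification: str          # human-readable rationale for the tool's decision

# Example: a generated "Actor: Pilot" element traced to its source sentence.
record = TraceRecord(
    element_id="actor-pilot",
    sources=[SourceSpan("REQ-042", 0, 38)],
    justification="Noun phrase 'the pilot' identified as the acting role "
                  "in 'The pilot shall acknowledge the alert.'",
)
print(record)
```

A record like this is what would let a reviewer audit a generated artifact element by element instead of re-deriving it from scratch, which is the manual-validation overhead the interviewees describe.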

📝 Abstract
Artificial Intelligence (AI) tools for automating design artifact generation are increasingly used in Requirements Engineering (RE) to transform textual requirements into structured diagrams and models. While these AI tools, particularly those based on Natural Language Processing (NLP), promise to improve efficiency, their adoption remains limited in regulated industries where transparency and traceability are essential. In this paper, we investigate the explainability gap in AI-driven design artifact generation through semi-structured interviews with ten practitioners from safety-critical industries. We examine how current AI-based tools are integrated into workflows and the challenges arising from their lack of explainability. We also explore mitigation strategies, their impact on project outcomes, and features needed to improve usability. Our findings reveal that non-explainable AI outputs necessitate extensive manual validation, reduce stakeholder trust, disrupt team collaboration, and introduce regulatory compliance risks, and that current tools struggle with domain-specific terminology, often negating the anticipated efficiency benefits. To address these issues, we identify key improvements, including source tracing, clear justifications for tool-generated decisions, support for domain-specific adaptation, and compliance validation. This study outlines a practical roadmap for improving the transparency, reliability, and applicability of AI tools in requirements engineering workflows, particularly in regulated and safety-critical environments where explainability is crucial for adoption and certification.
Problem

Research questions and friction points this paper is trying to address.

Addressing the explainability gap in AI-driven design artifact generation
Overcoming transparency and traceability challenges in regulated industries
Mitigating compliance risks from non-explainable AI outputs in RE
Innovation

Methods, ideas, or system contributions that make the work stand out.

Source tracing for AI-generated design artifacts
Clear justifications for tool-generated decisions
Domain-specific adaptation and compliance validation (see the validation sketch after this list)
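
The last contribution, compliance validation, lends itself to a simple automated check. The sketch below is entirely hypothetical; it assumes a record shape like the provenance example above (the field names are invented, and the paper does not prescribe a schema) and flags generated elements that would not be auditable: no traced source requirement, or no justification.

```python
# Hypothetical sketch of automated compliance validation: verify that every
# generated element is auditable, i.e. traced to at least one source
# requirement and accompanied by a justification.
from typing import Dict, List

def validate_records(records: List[Dict]) -> List[str]:
    """Return audit findings; an empty list means all checks passed."""
    findings = []
    for r in records:
        if not r.get("sources"):
            findings.append(f"{r['element_id']}: no source requirement traced")
        if not r.get("justification", "").strip():
            findings.append(f"{r['element_id']}: missing justification")
    return findings

records = [
    {"element_id": "actor-pilot",
     "sources": ["REQ-042"],
     "justification": "Role extracted from 'The pilot shall acknowledge the alert.'"},
    {"element_id": "usecase-ack-alert", "sources": [], "justification": ""},
]
for finding in validate_records(records):
    print(finding)
# -> usecase-ack-alert: no source requirement traced
# -> usecase-ack-alert: missing justification
```

In practice such checks would encode project- or regulator-specific rules rather than this generic pair, but the point stands: once provenance and justifications are machine-readable, compliance evidence can be produced automatically instead of assembled by hand during review.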