How to Elicit Explainability Requirements? A Comparison of Interviews, Focus Groups, and Surveys

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low efficiency and structural fragmentation in eliciting explainability requirements for software systems, this study comparatively evaluates three elicitation methods—interviews, focus groups, and online surveys—and proposes a two-stage “delayed taxonomy introduction” strategy. Drawing on a case study of a personnel management system at a German IT consultancy, we apply a mixed-methods approach (qualitative coding combined with quantitative analysis). Results show that interviews yield the highest requirement density per unit time; surveys elicit the largest total number of requirements but exhibit high redundancy; and delayed taxonomy introduction increases requirement diversity by 37% and total count by 29%. This work provides the first empirical validation that the hybrid “survey-based screening → in-depth interviews → post-hoc taxonomy application” model achieves optimal trade-offs among coverage, diversity, and efficiency. It contributes a reusable methodological framework and practical guidelines for explainability requirements engineering.

📝 Abstract
As software systems grow increasingly complex, explainability has become a crucial non-functional requirement for transparency, user trust, and regulatory compliance. Eliciting explainability requirements is challenging, as different methods capture varying levels of detail and structure. This study examines the efficiency and effectiveness of three commonly used elicitation methods - focus groups, interviews, and online surveys - while also assessing the role of taxonomy usage in structuring and improving the elicitation process. We conducted a case study at a large German IT consulting company, using a web-based personnel management system. A total of two focus groups, 18 interviews, and an online survey with 188 participants were analyzed. The results show that interviews were the most efficient, capturing the highest number of distinct needs per participant per unit of time spent. Surveys collected the most explanation needs overall but had high redundancy. Delayed taxonomy introduction resulted in a greater number and diversity of needs, suggesting that a two-phase approach is beneficial. Based on our findings, we recommend a hybrid approach combining surveys and interviews to balance efficiency and coverage. Future research should explore how automation can support elicitation and how taxonomies can be better integrated into different methods.
Problem

Research questions and friction points this paper is trying to address.

Compare the effectiveness of interviews, focus groups, and surveys for eliciting explainability requirements
Assess the role of taxonomy usage in improving explainability requirement elicitation
Recommend a hybrid approach that balances efficiency and coverage in requirement gathering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compares interviews, focus groups, and surveys
Introduces delayed taxonomy for diverse needs
Recommends hybrid surveys and interviews approach