What Makes a Good Response? An Empirical Analysis of Quality in Qualitative Interviews

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the absence of empirically validated quality metrics for evaluating how interview responses contribute to qualitative research objectives. Using a corpus of 343 interview transcripts comprising 16,940 responses, the authors systematically assess the predictive validity of ten established quality indicators with respect to research utility. Integrating qualitative content analysis, natural language processing, and statistical modeling, the analysis shows that direct relevance to a key research question is the strongest predictor of a response's value, whereas two metrics commonly used to evaluate NLP interview systems, clarity and surprisal-based informativeness, show no significant predictive power. These results challenge the applicability of current automated evaluation approaches and provide empirical grounding for assessing response quality in qualitative inquiry.
📝 Abstract
Qualitative interviews provide essential insights into human experiences when they elicit high-quality responses. While qualitative and NLP researchers have proposed various measures of interview quality, these measures lack validation that high-scoring responses actually contribute to the study's goals. In this work, we identify, implement, and evaluate 10 proposed measures of interview response quality to determine which are actually predictive of a response's contribution to the study findings. To conduct our analysis, we introduce the Qualitative Interview Corpus, a newly constructed dataset of 343 interview transcripts with 16,940 participant responses from 14 real research projects. We find that direct relevance to a key research question is the strongest predictor of response quality. We additionally find that two measures commonly used to evaluate NLP interview systems, clarity and surprisal-based informativeness, are not predictive of response quality. Our work provides analytic insights and grounded, scalable metrics to inform the design of qualitative studies and the evaluation of automated interview systems.
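One of the NLP metrics the abstract names, surprisal-based informativeness, scores a response by how unexpected its tokens are under a language model. A minimal sketch of the idea, using a toy unigram model with add-one smoothing as a stand-in (the paper's actual language model and implementation are not specified here; `surprisal_informativeness` and its corpus argument are illustrative names):

```python
import math
from collections import Counter

def surprisal_informativeness(response, corpus_tokens):
    """Mean per-token surprisal (bits) of a response under a unigram
    model estimated from a reference corpus, with add-one smoothing.
    Higher values mean the response contains less expected tokens."""
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    vocab = len(counts) + 1  # reserve one slot for unseen tokens

    def surprisal(token):
        p = (counts.get(token, 0) + 1) / (total + vocab)
        return -math.log2(p)

    tokens = response.lower().split()
    return sum(surprisal(t) for t in tokens) / len(tokens)

# A frequent token scores lower surprisal than a rare or unseen one.
corpus = "the interview was long the questions were clear".split()
print(surprisal_informativeness("the", corpus)
      < surprisal_informativeness("surprising", corpus))
```

The paper's finding is that such scores, however computed, did not predict whether a response actually contributed to the study's findings.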
Problem

Research questions and friction points this paper is trying to address.

interview quality
qualitative interviews
response quality
research contribution
empirical validation
Innovation

Methods, ideas, or system contributions that make the work stand out.

interview response quality
empirical validation
Qualitative Interview Corpus
relevance to research question
automated interview systems