🤖 AI Summary
Problem: Traditional structured surveys often fail to elicit deep, nuanced qualitative insights. Method: This work introduces a hybrid qualitative data collection paradigm by embedding theory-based interview probes—descriptive, idiographic, clarifying, and explanatory—into an LLM-powered chatbot. We conduct the first systematic, three-phase HCI study (exploration, requirements gathering, evaluation) comparing probe efficacy using a split-plot experimental design and a multidimensional framework for qualitative response quality. Contribution/Results: Probes significantly enhance response depth and informational richness; descriptive probes perform best during exploration, while explanatory probes excel during evaluation. User acceptance is high across all phases. This work advances the theoretically grounded application of LLMs in human-centered research and provides a reusable methodological foundation for scalable, high-fidelity qualitative data collection.
📝 Abstract
Surveys are a widespread method for collecting data at scale, but their rigid structure often limits the depth of qualitative insights obtained. While interviews naturally yield richer responses, they are challenging to conduct across diverse locations and with large participant pools. To partially bridge this gap, we investigate the potential of LLM-based chatbots to support qualitative data collection through interview probes embedded in surveys. We assess four theory-based interview probes: descriptive, idiographic, clarifying, and explanatory. Through a split-plot study design (N=64), we compare the probes' impact on response quality and user experience across three key stages of HCI research: exploration, requirements gathering, and evaluation. Our results show that probes facilitate the collection of high-quality survey data, with specific probes proving effective at different research stages. We contribute practical and methodological implications for using chatbots as research tools to enrich qualitative data collection.
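The paper itself does not include an implementation in this summary, but as a rough illustration of the core idea, the sketch below shows how the four theory-based probe types might be attached to a survey question as follow-up prompts for an LLM chatbot. All names and prompt wordings here (`PROBE_TEMPLATES`, `build_probe_prompt`, the stub `call_llm`) are illustrative assumptions, not the authors' actual prompts or pipeline.

```python
# Hypothetical sketch: embedding theory-based interview probes into an
# LLM-powered survey chatbot. Prompt wording and function names are
# assumptions for illustration, not the study's implementation.

PROBE_TEMPLATES = {
    "descriptive": "Could you describe that in more detail, e.g. a concrete situation?",
    "idiographic": "How does this relate to your own personal experience?",
    "clarifying":  "What exactly do you mean by that?",
    "explanatory": "Why do you think that is the case?",
}

def build_probe_prompt(question: str, answer: str, probe_type: str) -> str:
    """Compose an instruction asking the chatbot to follow up on a survey
    answer with exactly one probe of the given type."""
    probe_hint = PROBE_TEMPLATES[probe_type]
    return (
        "You are a survey chatbot collecting qualitative data.\n"
        f"Survey question: {question}\n"
        f"Participant answer: {answer}\n"
        f"Ask exactly one {probe_type} follow-up probe in the spirit of: '{probe_hint}'. "
        "Keep it short, neutral, and conversational."
    )

def call_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call to any LLM provider."""
    return "[LLM-generated probe would appear here]"

if __name__ == "__main__":
    q = "How do you currently keep track of your daily tasks?"
    a = "I mostly use sticky notes."
    for probe in PROBE_TEMPLATES:
        print(probe, "->", call_llm(build_probe_prompt(q, a, probe)))
```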