LLMREI: Automating Requirements Elicitation Interviews with LLMs

📅 2025-07-03
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Requirements elicitation interviews heavily rely on human analysts, leading to high resource consumption, significant subjective bias, and inefficient communication. To address these challenges, this paper proposes a fine-tuning-free, large language model (LLM)-driven automated interview framework that integrates zero-shot prompting with least-to-most prompting to enable context-adaptive questioning and dynamic requirement extraction. By eliminating domain-specific fine-tuning, the approach ensures strong cross-scenario generalizability. Empirical evaluation across 33 simulated interviews demonstrates that the system accurately captures critical requirements, achieves error rates comparable to human analysts, and exhibits robust contextual understanding and responsive capability. This work constitutes the first systematic validation of prompt engineering's efficacy in automating requirements elicitation, establishing a deployable paradigm for scalable, consistent, and reliable requirement acquisition.

๐Ÿ“ Abstract
Requirements elicitation interviews are crucial for gathering system requirements but heavily depend on skilled analysts, making them resource-intensive, susceptible to human biases, and prone to miscommunication. Recent advancements in Large Language Models present new opportunities for automating parts of this process. This study introduces LLMREI, a chat bot designed to conduct requirements elicitation interviews with minimal human intervention, aiming to reduce common interviewer errors and improve the scalability of requirements elicitation. We explored two main approaches, zero-shot prompting and least-to-most prompting, to optimize LLMREI for requirements elicitation and evaluated its performance in 33 simulated stakeholder interviews. A third approach, fine-tuning, was initially considered but abandoned due to poor performance in preliminary trials. Our study assesses the chat bot's effectiveness in three key areas: minimizing common interview errors, extracting relevant requirements, and adapting its questioning based on interview context and user responses. Our findings indicate that LLMREI makes a similar number of errors compared to human interviewers, is capable of extracting a large portion of requirements, and demonstrates a notable ability to generate highly context-dependent questions. We envision the greatest benefit of LLMREI in automating interviews with a large number of stakeholders.
Problem

Research questions and friction points this paper is trying to address.

Automating requirements elicitation interviews using LLMs
Reducing human biases and errors in interviews
Improving scalability of requirements gathering process
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automates requirements interviews using LLM chatbot
Employs zero-shot and least-to-most prompting
Adapts questions based on context dynamically
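The two prompting strategies the paper compares can be sketched as prompt builders for a chat model. This is a minimal illustration, not the paper's actual prompts: the wording, the function names, and the `call_llm` stand-in (representing any chat-completion API) are all assumptions.

```python
def build_zero_shot_prompt(transcript: list[str]) -> str:
    """Zero-shot: ask the model directly for the next interview question,
    with no decomposition and no worked examples."""
    history = "\n".join(transcript)
    return (
        "You are conducting a requirements elicitation interview.\n"
        f"Conversation so far:\n{history}\n"
        "Ask the single most useful next question."
    )

def build_least_to_most_prompt(transcript: list[str]) -> str:
    """Least-to-most: have the model break the task into subproblems
    and work from the easiest one toward the harder ones."""
    history = "\n".join(transcript)
    return (
        "You are conducting a requirements elicitation interview.\n"
        f"Conversation so far:\n{history}\n"
        "Step 1: List the requirement areas not yet covered.\n"
        "Step 2: Order them from easiest to hardest to elicit.\n"
        "Step 3: Ask one question targeting the easiest uncovered area."
    )

def next_question(transcript: list[str], call_llm) -> str:
    """Hypothetical interview loop step: pick a strategy, query the model."""
    return call_llm(build_least_to_most_prompt(transcript))
```

The key design difference is that the least-to-most variant forces an explicit decomposition before the question is chosen, which is what lets the question adapt to which requirement areas the transcript has already covered.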