Using Large Language Models to Develop Requirements Elicitation Skills

📅 2025-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Requirements elicitation (RE) education lacks low-cost, high-fidelity environments for interview training. Method: this study proposes an LLM-based interactive virtual-client pedagogy, constructing conversational, scalable, LLM-driven virtual stakeholders that enable real-time, online interview practice. It integrates a human–AI collaborative interview framework with a hybrid assessment combining qualitative feedback and quantitative behavioral analysis. Contribution/Results: this is the first systematic application of LLMs to cultivating dynamic interviewing competencies in RE education. The approach maintains technical soundness and comparable time on task while significantly enhancing student immersion and perceived authenticity. An empirical evaluation shows that 87% of students prefer this mode, supporting its scalability, accessibility, and pedagogical effectiveness.

📝 Abstract
Requirements Elicitation (RE) is a crucial software engineering skill that involves interviewing a client and then devising a software design based on the interview results. Teaching this inherently experiential skill effectively has high cost, such as acquiring an industry partner to interview, or training course staff or other students to play the role of a client. As a result, a typical instructional approach is to provide students with transcripts of real or fictitious interviews to analyze, which exercises the skill of extracting technical requirements but fails to develop the equally important interview skill itself. As an alternative, we propose conditioning a large language model to play the role of the client during a chat-based interview. We perform a between-subjects study (n=120) in which students construct a high-level application design from either an interactive LLM-backed interview session or an existing interview transcript describing the same business processes. We evaluate our approach using both a qualitative survey and quantitative observations about participants' work. We find that both approaches provide sufficient information for participants to construct technically sound solutions and require comparable time on task, but the LLM-based approach is preferred by most participants. Importantly, we observe that LLM-backed interview is seen as both more realistic and more engaging, despite the LLM occasionally providing imprecise or contradictory information. These results, combined with the wide accessibility of LLMs, suggest a new way to practice critical RE skills in a scalable and realistic manner without the overhead of arranging live interviews.
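The abstract describes conditioning an LLM to play the client in a chat-based interview. A minimal sketch of what that conditioning could look like, assuming a standard chat-completion message format; the persona text and function name are hypothetical, since the paper's actual prompts are not reproduced here:

```python
# Sketch: conditioning an LLM to role-play a client for RE interview practice.
# A system prompt fixes the persona and business context; the growing message
# list carries the chat-based interview turn by turn.

CLIENT_PERSONA = (
    "You are the owner of a small bookstore who wants custom inventory "
    "software. Answer the student's interview questions in character: "
    "describe business needs in plain language and do not volunteer "
    "technical requirements unprompted."
)  # hypothetical persona; the paper's prompt is not published

def build_interview_turn(history, student_question):
    """Assemble the message list for one turn of the interview."""
    messages = [{"role": "system", "content": CLIENT_PERSONA}]
    messages.extend(history)  # prior {"role": ..., "content": ...} dicts
    messages.append({"role": "user", "content": student_question})
    return messages

turn = build_interview_turn([], "What problem should the software solve?")
# `turn` would be passed to any chat-completion API; the assistant's reply
# is appended to `history` before the student's next question.
```

The message list, rather than any fine-tuning, is what sustains the client role across the session, which is why the approach scales without per-course setup.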
Problem

Research questions and friction points this paper is trying to address.

Teaching requirements elicitation skills effectively is costly.
Traditional transcript-based methods fail to develop the interview skill itself.
LLM-based interactive interviews are proposed as a scalable alternative.
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM simulates client for interactive RE training
Between-subjects study compares LLM vs transcript methods
LLM approach enhances realism and engagement in RE
Nelson Lojo
Univ. of California, Berkeley, CA, USA

Rafael González
SCORE Lab, Univ. of Sevilla, Sevilla, Spain

Rohan Philip
Univ. of California, Berkeley, CA, USA

J. A. Parejo
SCORE Lab, Univ. of Sevilla, Sevilla, Spain

Amador Durán Toro
SCORE Lab, Univ. of Sevilla, Sevilla, Spain

Armando Fox
UC Berkeley
Computer science education, education technology, machine learning, programming systems, history of computing

Pablo Fernández
SCORE Lab, Univ. of Sevilla, Sevilla, Spain