🤖 AI Summary
This study investigates whether large language models (LLMs) can bridge the gap between UX experts and non-experts in authoring user scenarios. In a user study with 60 participants (30 UX experts and 30 novices), participants wrote user scenarios with or without an LLM-supported writing assistant; outputs were evaluated via mixed methods, combining structured scoring and qualitative coding to assess structural completeness, expressive clarity, and audience orientation. Results show that LLM assistance substantially improves non-experts' performance: their scenarios reach levels of structure and clarity comparable to experts', and they particularly excel at articulating user perspectives. The findings position LLMs as effective tools for democratizing requirements analysis and highlight their capacity to support empathetic, user-centered expression. This work advances accessible UX practice by lowering barriers to rigorous scenario-based design.
📝 Abstract
The process of requirements analysis requires an understanding of the end users of a system. Expert stakeholders, such as User Experience (UX) designers, therefore usually create various descriptions containing information about the users and their possible needs. In this paper, we investigate to what extent UX novices are able to capture such descriptions in the form of user scenarios. We conducted a user study with 60 participants, consisting of 30 UX experts and 30 novices, who were asked to write a user scenario with or without the help of an LLM-supported writing assistant. Our findings show that LLMs empower laypersons to write reasonable user scenarios and to provide first-hand insights for requirements analysis that are comparable to those of UX experts in terms of structure and clarity, while especially excelling at audience orientation. We present our qualitative and quantitative findings, including user scenario anatomies, potential influences, and differences in how participants approached the task.