From What to How: Bridging User Requirements with Software Development Using Large Language Models

📅 2026-02-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited support of current large language models (LLMs) for the software design phase, where accurately translating requirements into implementable designs remains a challenge. To bridge this gap, we introduce DesBench, the first evaluation benchmark specifically tailored for software design, comprising 30 manually curated Java projects that include requirements documents, design models, implementation code, and test cases. Using DesBench, we systematically assess mainstream LLMs across three key tasks: design-aware code generation, object-oriented modeling, and acceptance test design. Our findings reveal that LLMs struggle to produce correct code when no design or only high-level design is provided, exhibit significant deficiencies in modeling class relationships, yet generate acceptance tests that achieve code coverage comparable to human-written ones.

📝 Abstract
Recently, large language models (LLMs) have been extensively used to enhance development efficiency, leading to numerous benchmarks for evaluating their performance. However, these benchmarks predominantly focus on implementation, overlooking the equally critical aspect of software design. This gap raises two pivotal questions: (1) Can LLMs handle software design? (2) Can LLMs write code that follows a given design? To investigate these questions, this paper proposes DesBench, a design-aware benchmark for evaluating LLMs on three software design-related tasks: design-aware code generation, object-oriented modeling, and the design of acceptance test cases. DesBench comprises 30 manually crafted Java projects that include requirement documents, design models, implementations, and acceptance tests, amounting to 30 design models, 194 Java classes, and 737 test cases in total. We evaluated seven state-of-the-art LLMs, including three DeepSeek-R1 variants, two Qwen2.5 models, and two GPT models, using DesBench. The results reveal that LLMs remain significantly challenged by the intricacies of software design: (1) For code generation, LLMs struggle to produce correct implementations when provided with only high-level designs or no design at all. (2) In object-oriented modeling, while LLMs can accurately identify objects and classes, they face challenges in defining operations and inter-class relationships. (3) Acceptance test cases generated by LLMs from functional requirements achieve code coverage comparable to those written by humans. Our research highlights the current limitations of LLMs in managing software design and calls for further investigation into new design methodologies and languages suitable for LLM-based development.
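To illustrate the acceptance-test task the abstract evaluates, here is a minimal, hypothetical sketch (not taken from DesBench) of turning a natural-language functional requirement into an executable Java acceptance test; the requirement, the `Cart` class, and the test are all invented for illustration.

```java
// Hypothetical requirement (not from DesBench): "The cart total equals
// the sum of quantity * unit price over all line items."
import java.util.ArrayList;
import java.util.List;

class Cart {
    // Each entry stores {quantity, unitPrice}.
    private final List<double[]> items = new ArrayList<>();

    void add(int quantity, double unitPrice) {
        items.add(new double[] { quantity, unitPrice });
    }

    double total() {
        double sum = 0.0;
        for (double[] item : items) {
            sum += item[0] * item[1];
        }
        return sum;
    }
}

public class CartAcceptanceTest {
    public static void main(String[] args) {
        // Acceptance test derived directly from the requirement above.
        Cart cart = new Cart();
        cart.add(2, 3.50); // 2 units at 3.50
        cart.add(1, 1.25); // 1 unit at 1.25
        double expected = 2 * 3.50 + 1 * 1.25;
        if (Math.abs(cart.total() - expected) > 1e-9) {
            throw new AssertionError("cart total mismatch: " + cart.total());
        }
        System.out.println("PASS: total = " + cart.total());
    }
}
```

A benchmark of this kind would then measure, for example, what fraction of the implementation such generated tests cover compared to human-written ones.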
Problem

Research questions and friction points this paper is trying to address.

software design
large language models
code generation
object-oriented modeling
acceptance testing
Innovation

Methods, ideas, or system contributions that make the work stand out.

design-aware benchmark
large language models
software design
object-oriented modeling
acceptance test generation
Xiao He
University of Science and Technology Beijing, Beijing, China
Ru Chen
University of Science and Technology Beijing, Beijing, China
Jialun Cao
The Hong Kong University of Science and Technology
SE for AI
AI for SE