LLMCFG-TGen: Using LLM-Generated Control Flow Graphs to Automatically Create Test Cases from Use Cases

📅 2025-12-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM-based test case generation approaches often suffer from incomplete path coverage, high redundancy, and difficulty in modeling complex conditional logic. This paper proposes a requirement-driven test generation method that leverages LLMs to synthesize control flow graphs (CFGs): first, an LLM parses natural-language requirements to construct a structured CFG; then, graph traversal algorithms enumerate all complete execution paths, which are automatically translated into executable test cases. To our knowledge, this is the first work to systematically integrate LLM-generated CFGs into requirement-based testing, enabling full-path coverage and semantically precise modeling of program logic. Experimental evaluation confirms that the generated CFGs are structurally sound and path-complete, yielding logically clear, non-redundant test cases. The approach significantly improves the level of automation and test completeness, and has been validated and adopted in industrial practice.
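The enumeration step of the pipeline above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the adjacency-list representation, the "start"/"end" node labels, and the toy login use case are all assumptions made for the example.

```python
# Sketch of step 2: enumerating all complete execution paths in a CFG.
# The graph shape (adjacency dict, "start"/"end" labels) is an illustrative
# assumption, not the paper's actual data structure.

def enumerate_paths(cfg, entry, exit_node):
    """Depth-first search yielding every acyclic path from entry to exit."""
    paths = []

    def dfs(node, path):
        if node == exit_node:
            paths.append(path)
            return
        for succ in cfg.get(node, []):
            if succ not in path:  # skip revisits so cycles don't loop forever
                dfs(succ, path + [succ])

    dfs(entry, [entry])
    return paths

# Toy CFG for a login use case with one condition (valid / invalid password).
cfg = {
    "start": ["check_password"],
    "check_password": ["grant_access", "show_error"],
    "grant_access": ["end"],
    "show_error": ["end"],
}
print(enumerate_paths(cfg, "start", "end"))
```

Each returned path corresponds to one end-to-end scenario through the use case, which is what guarantees the full path coverage the paper claims over prompt-only baselines.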

📝 Abstract
Appropriate test case generation is critical in software testing, significantly impacting testing quality. Requirements-Based Test Generation (RBTG) derives test cases from software requirements, aiming to verify whether the system's behavior aligns with user needs and expectations. Requirements are often documented in Natural Language (NL), with use-case descriptions being a popular method for capturing functional behaviors and interaction flows in a structured form. Large Language Models (LLMs) have shown strong potential for automating test generation directly from NL requirements. However, current LLM-based approaches may not provide comprehensive, non-redundant coverage. They may also fail to capture complex conditional logic in requirements, resulting in incomplete test cases. We propose a new approach that automatically generates test cases from NL use-case descriptions, called Test Generation based on LLM-generated Control Flow Graphs (LLMCFG-TGen). LLMCFG-TGen comprises three main steps: (1) an LLM transforms a use case into a structured CFG that encapsulates all potential branches; (2) the generated CFG is explored, and all complete execution paths are enumerated; and (3) the execution paths are then used to generate the test cases. To evaluate our proposed approach, we conducted a series of experiments. The results show that LLMs can effectively construct well-structured CFGs from NL use cases. Compared with the baseline methods, LLMCFG-TGen achieves full path coverage, improving completeness and ensuring clear and accurate test cases. Practitioner assessments confirm that LLMCFG-TGen produces logically consistent and comprehensive test cases, while substantially reducing manual effort. The findings suggest that coupling LLM-based semantic reasoning with structured modeling effectively bridges the gap between NL requirements and systematic test generation.
Problem

Research questions and friction points this paper is trying to address.

Automatically generates test cases from natural language use-case descriptions
Addresses incomplete coverage and complex conditional logic in requirements
Enhances test completeness and reduces manual effort through structured modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM generates structured control flow graphs from use cases
Enumerates all execution paths for comprehensive coverage
Automates test case creation with reduced manual effort
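The final step, translating enumerated paths into test cases, can be sketched as follows. The mapping from CFG nodes to step descriptions and the output fields (`id`, `steps`, `expected`) are invented for illustration; the paper's actual test-case format is not specified here.

```python
# Sketch of step 3: turning execution paths into test-case skeletons.
# Node-to-description mapping and output fields are illustrative assumptions.

def paths_to_test_cases(paths, descriptions):
    """Convert each path into a dict with an id, ordered steps, and expectation."""
    cases = []
    for i, path in enumerate(paths, start=1):
        steps = [descriptions.get(node, node) for node in path
                 if node not in ("start", "end")]
        cases.append({
            "id": f"TC-{i:03d}",
            "steps": steps,
            "expected": steps[-1] if steps else "",  # last step as expected outcome
        })
    return cases

descriptions = {
    "check_password": "Enter credentials and submit",
    "grant_access": "System grants access to the dashboard",
    "show_error": "System displays an invalid-password error",
}
paths = [
    ["start", "check_password", "grant_access", "end"],
    ["start", "check_password", "show_error", "end"],
]
for case in paths_to_test_cases(paths, descriptions):
    print(case["id"], "->", " / ".join(case["steps"]))
```

Because each test case is derived from exactly one distinct path, the resulting suite is non-redundant by construction, which is the property the Innovation bullets emphasize.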
Zhenzhen Yang
School of Computer Science and Engineering, Macau University of Science and Technology, China and School of Artificial Intelligence, Zhejiang Polytechnic University of Mechanical and Electrical Engineering, China
Chenhui Cui
School of Computer Science and Engineering, Macau University of Science and Technology, China
Tao Li
School of Computer Science and Engineering, Macau University of Science and Technology, China
Rubing Huang
Macau University of Science and Technology
AI for Software Engineering · Software Engineering for AI · Software Testing · AI Applications
Nan Niu
University of North Florida
Software Engineering · Requirements Engineering · Multimedia Computing · Human-Centered Computing
Dave Towey
University of Nottingham Ningbo China
Software Testing · Metamorphic Testing · Adaptive Random Testing · Technology-enhanced Learning and Instruction · Computer Literacy
Shikai Guo
Associate Professor, Dalian Maritime University
AI for EDA · FPGA Logical Synthesis · Placement & Routing · Compile Optimization · Software Engineering