An Empirical Study on the Capability of LLMs in Decomposing Bug Reports

📅 2025-04-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the effectiveness of large language models (LLMs) in automatically decomposing complex privacy-related bug reports to enhance developers’ understanding of root causes and accelerate repair. We conduct the first systematic evaluation of ChatGPT and DeepSeek on a real-world Apache Jira privacy bug dataset, comparing zero-shot and few-shot prompting strategies. Results demonstrate that prompt quality is decisive for performance: few-shot prompting substantially improves true decomposition rates (ChatGPT +140%, DeepSeek +163.64%). Our contributions are threefold: (1) we establish bug report decomposition as a novel, practically relevant LLM application task; (2) we identify and analyze the key mechanisms by which few-shot prompting enhances decomposition accuracy—particularly through improved structural alignment and domain-aware grounding; and (3) we empirically validate LLMs’ capability in comprehending and structurally disentangling privacy-specific defects, while highlighting persistent limitations in fine-grained accuracy and domain-specific semantic modeling.

📝 Abstract
Background: Bug reports are essential to the software development life cycle. They help developers track and resolve issues, but are often difficult to process due to their complexity, which can delay resolution and affect software quality. Aims: This study investigates whether large language models (LLMs) can assist developers in automatically decomposing complex bug reports into smaller, self-contained units, making them easier to understand and address. Method: We conducted an empirical study on 127 resolved privacy-related bug reports collected from Apache Jira. We evaluated ChatGPT and DeepSeek using different prompting strategies. We first tested both LLMs with zero-shot prompts, then applied improved prompts with demonstrations (using few-shot prompting) to measure their abilities in bug decomposition. Results: Our findings show that LLMs are capable of decomposing bug reports, but their overall performance still requires further improvement and strongly depends on the quality of the prompts. With zero-shot prompts, both studied LLMs (ChatGPT and DeepSeek) performed poorly. After prompt tuning, ChatGPT's true decomposition rate increased by 140% and DeepSeek's by 163.64%. Conclusions: LLMs show potential in helping developers analyze and decompose complex bug reports, but they still need improvement in terms of accuracy and bug understanding.
Problem

Research questions and friction points this paper is trying to address.

Investigates LLMs' ability to decompose complex bug reports
Evaluates ChatGPT and DeepSeek using different prompting strategies
Shows LLMs' potential but highlights need for accuracy improvement
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs decompose bug reports automatically
Few-shot prompting improves decomposition accuracy
Prompt tuning boosts performance significantly
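The few-shot strategy the study evaluates can be illustrated with a minimal prompt-construction sketch. The demonstration bug report, its sub-issues, and the instruction wording below are invented for illustration; they are not the paper's actual prompts or examples from its Apache Jira dataset.

```python
# Hypothetical few-shot prompt builder for bug-report decomposition.
# The demonstration below is an invented privacy-bug example, not drawn
# from the paper's dataset or its evaluated prompts.

FEW_SHOT_DEMO = """\
Bug report:
  "User emails are written to the debug log, and the log file is world-readable."
Decomposition:
  1. Sensitive data (email addresses) is logged at debug level.
  2. The log file permissions allow access by any local user.
"""

def build_few_shot_prompt(bug_report: str) -> str:
    """Assemble a prompt: task instruction, one demonstration, then the target report."""
    return (
        "Decompose the following bug report into smaller, "
        "self-contained sub-issues.\n\n"
        f"Example:\n{FEW_SHOT_DEMO}\n"
        f'Bug report:\n  "{bug_report}"\nDecomposition:\n'
    )

def build_zero_shot_prompt(bug_report: str) -> str:
    """The zero-shot variant gives the same instruction with no demonstration."""
    return (
        "Decompose the following bug report into smaller, "
        "self-contained sub-issues.\n\n"
        f'Bug report:\n  "{bug_report}"\nDecomposition:\n'
    )
```

The only difference between the two variants is the inserted demonstration, which matches the paper's finding that the added structural example, rather than any change to the task instruction, drives the improvement in true decomposition rate.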
Zhiyuan Chen
Rochester Institute of Technology, Rochester, New York, United States
Vanessa Nava-Camal
Rochester Institute of Technology, Rochester, New York, United States
Ahmad D. Suleiman
Rochester Institute of Technology, Rochester, New York, United States
Yiming Tang
Rochester Institute of Technology, Rochester, New York, United States
Daqing Hou
Rochester Institute of Technology
Software Engineering, Cybersecurity, Behavioral Biometrics, Education Research, Smart Energy
Weiyi Shang
University of Waterloo, Waterloo, Ontario, Canada