A Dataset and Preliminary Study of Using GPT-5 for Code-change Impact Analysis

📅 2025-12-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Code-change impact analysis is a critical task in software maintenance, yet existing approaches rely heavily on manual effort and suffer from low efficiency, and a systematic exploration of large language models (LLMs) for this task is still lacking. To address this gap, the authors introduce the first benchmark dataset for code-change impact analysis, encompassing seed changes, impacted-change pairs, and change types. They propose a multi-granularity input paradigm that integrates seed changes, parent commit trees, and diff hunks, and conduct the first systematic evaluation of GPT-5 and GPT-5-mini on this task. Experimental results show that both models exhibit limited overall capability, though incorporating diff hunks yields modest improvements, and GPT-5 outperforms GPT-5-mini. This work establishes a foundational benchmark, introduces a principled input methodology, and provides empirical baselines, thereby advancing LLM-driven automated impact analysis.

📝 Abstract
Understanding source code changes and their impact on other code entities is a crucial skill in software development. However, the analysis of code changes and their impact is often performed manually and is therefore time-consuming. Recent advancements in AI, and in particular large language models (LLMs), show promise in helping developers with various code analysis tasks. However, the extent to which this potential can be applied to understanding code changes and their impact is underexplored. To address this gap, we study the capabilities of GPT-5 and GPT-5-mini to predict the code entities impacted by given source code changes. Because existing datasets lack crucial information about seed changes and impacted code entities, we construct a dataset containing the seed changes, change pairs, and change types for each commit. Our experiments evaluate the LLMs in two configurations: (1) seed-change information and the parent commit tree, and (2) seed-change information, the parent commit tree, and the diff hunk of each seed change. We found that both LLMs perform poorly in the two configurations, with GPT-5 outperforming GPT-5-mini. Furthermore, providing the diff hunks helps both models slightly improve their performance.
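The two input configurations described above can be sketched as a simple prompt-assembly step. This is an illustrative sketch only: the function name `build_prompt`, the prompt wording, and the example inputs are hypothetical and not taken from the paper, which does not publish its exact prompt templates.

```python
from typing import Optional

def build_prompt(seed_change: str, commit_tree: str,
                 diff_hunk: Optional[str] = None) -> str:
    """Assemble an LLM prompt from seed-change information, the parent
    commit tree, and (in configuration 2) the seed change's diff hunk."""
    parts = [
        "Predict the code entities impacted by the following change.",
        f"Seed change: {seed_change}",
        f"Parent commit tree:\n{commit_tree}",
    ]
    if diff_hunk is not None:  # configuration (2) additionally supplies the diff hunk
        parts.append(f"Diff hunk:\n{diff_hunk}")
    return "\n\n".join(parts)

# Configuration (1): seed-change information + parent commit tree
p1 = build_prompt("rename method parse() in Parser.java",
                  "src/\n  Parser.java\n  Lexer.java")

# Configuration (2): the same inputs plus the seed change's diff hunk
p2 = build_prompt("rename method parse() in Parser.java",
                  "src/\n  Parser.java\n  Lexer.java",
                  diff_hunk="-  public AST parse() {\n+  public AST parseSource() {")
```

The resulting string would then be sent to GPT-5 or GPT-5-mini; the abstract reports that adding the diff hunk (configuration 2) slightly improves both models.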
Problem

Research questions and friction points this paper is trying to address.

Studying GPT-5's ability to predict the impact of code changes
Addressing the lack of datasets for code-change impact analysis
Evaluating LLMs with seed-change and diff-hunk information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constructing a dataset with seed changes and impacted code entities
Evaluating GPT-5 and GPT-5-mini for code-change impact prediction
Providing diff hunks alongside parent commit trees to improve model performance