🤖 AI Summary
Existing JIT-SDP datasets suffer from high label noise and coarse-grained defect localization, and pre-trained language models (PLMs) still capture the semantics of code changes only superficially. To address these issues, we propose ReDef, the first high-confidence, function-level defect change dataset (3,164 defective / 10,268 clean samples), constructed via revert-commit anchoring and GPT-assisted multi-round auditing. Methodologically, we introduce counterfactual perturbation analysis, its first application in JIT-SDP, revealing that mainstream PLMs (CodeBERT, CodeT5+, UniXcoder) rely heavily on superficial lexical cues rather than edit semantics. We systematically evaluate five input encodings and identify the compact diff format as the most effective. Our core contributions are threefold: (1) a high-quality, rigorously curated benchmark dataset; (2) an interpretable, semantics-aware evaluation paradigm for JIT-SDP; and (3) empirical evidence that current PLMs are fundamentally limited in capturing the semantic intent of code changes for just-in-time defect prediction.
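To give a rough sense of what a compact diff-style input could look like, the sketch below keeps only the changed lines of a function and prefixes them with add/delete markers instead of feeding the whole before/after bodies to the model. The `[ADD]`/`[DEL]` tokens and the `encode_compact_diff` helper are illustrative assumptions, not the paper's exact format.

```python
import difflib

def encode_compact_diff(old_lines, new_lines):
    """Keep only changed lines, prefixed with add/delete markers, instead of
    concatenating the whole before/after function bodies."""
    tokens = []
    matcher = difflib.SequenceMatcher(a=old_lines, b=new_lines)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag in ("delete", "replace"):
            tokens.extend("[DEL] " + line.strip() for line in old_lines[i1:i2])
        if tag in ("insert", "replace"):
            tokens.extend("[ADD] " + line.strip() for line in new_lines[j1:j2])
    return " ".join(tokens)

old = ["int div(int a, int b) {", "  return a / b;", "}"]
new = ["int div(int a, int b) {", "  if (b == 0) return 0;", "  return a / b;", "}"]
print(encode_compact_diff(old, new))  # -> "[ADD] if (b == 0) return 0;"
```

The appeal of such an encoding is that the model's input budget is spent on the edit itself rather than on unchanged context, which is consistent with the paper's finding that diff-style inputs outperform whole-function formats.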
📝 Abstract
Just-in-Time software defect prediction (JIT-SDP) plays a critical role in prioritizing risky code changes during code review and continuous integration. However, existing datasets often suffer from noisy labels and low precision in identifying bug-inducing commits. To address this, we present ReDef (Revert-based Defect dataset), a high-confidence benchmark of function-level modifications curated from 22 large-scale C/C++ projects. Defective cases are anchored by revert commits, while clean cases are validated through post-hoc history checks. Ambiguous instances are conservatively filtered out via a GPT-assisted triage process involving multiple rounds of voting and auditing. This pipeline yields 3,164 defective and 10,268 clean modifications, offering substantially more reliable labels than existing resources. Beyond dataset construction, we provide the first systematic evaluation of how pre-trained language models (PLMs) reason about code modifications -- specifically, which input encodings most effectively expose change information, and whether models genuinely capture edit semantics. We fine-tune CodeBERT, CodeT5+, and UniXcoder under five encoding strategies, and further probe their sensitivity through counterfactual perturbations that swap added/deleted blocks, invert diff polarity, or inject spurious markers. Our results show that compact diff-style encodings consistently outperform whole-function formats across all PLMs, with statistical tests confirming large, model-independent effects. However, under counterfactual tests, performance degrades little or not at all -- revealing that what appears to be robustness in fact reflects reliance on superficial cues rather than true semantic understanding. These findings indicate that, unlike in snapshot-based tasks, current PLMs remain limited in their ability to genuinely comprehend code modifications.
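To make the three counterfactual perturbations concrete, here is a minimal sketch operating on a marker-based compact diff like the one above. The marker names and helper functions are one possible reading of "swap added/deleted blocks", "invert diff polarity", and "inject spurious markers", offered as an assumption for illustration rather than the authors' implementation.

```python
import random

def invert_polarity(diff_tokens):
    """Relabel every [ADD] line as [DEL] and vice versa, flipping the
    apparent direction of the edit while keeping the same text."""
    flipped = []
    for t in diff_tokens:
        if t.startswith("[ADD]"):
            flipped.append("[DEL]" + t[len("[ADD]"):])
        elif t.startswith("[DEL]"):
            flipped.append("[ADD]" + t[len("[DEL]"):])
        else:
            flipped.append(t)
    return flipped

def swap_add_del_blocks(diff_tokens):
    """Regroup the diff so all deleted lines come before all added lines,
    discarding the original interleaving of the edit blocks."""
    deleted = [t for t in diff_tokens if t.startswith("[DEL]")]
    added = [t for t in diff_tokens if t.startswith("[ADD]")]
    context = [t for t in diff_tokens if not t.startswith(("[ADD]", "[DEL]"))]
    return context + deleted + added

def inject_spurious_markers(diff_tokens, rate=0.3, seed=0):
    """Prepend [ADD]/[DEL] markers to unchanged context lines at random,
    adding lexical cues that carry no edit semantics at all."""
    rng = random.Random(seed)
    out = []
    for t in diff_tokens:
        if not t.startswith(("[ADD]", "[DEL]")) and rng.random() < rate:
            out.append(rng.choice(["[ADD] ", "[DEL] "]) + t)
        else:
            out.append(t)
    return out

diff = ["[DEL] return a / b;", "[ADD] if (b == 0) return 0;", "[ADD] return a / b;"]
print(invert_polarity(diff))
print(swap_add_del_blocks(diff))
```

A model that genuinely encodes edit semantics should change its prediction when the edit's direction or structure is flipped in this way; the abstract's finding is that the fine-tuned PLMs are largely insensitive to such perturbations, which is what points to reliance on surface cues.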