Breaking New Ground in Software Defect Prediction: Introducing Practical and Actionable Metrics with Superior Predictive Power for Enhanced Decision-Making

📅 2025-08-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limited interpretability and practicality of software defect prediction by systematically incorporating human error theory into method-level defect prediction for the first time, establishing a developer habit–based human error framework. We propose a set of actionable, non-code metrics—including cognitive load and habit deviation—that complement or replace conventional code- and history-based measures. Empirical evaluation across 21 critical open-source projects demonstrates that these metrics significantly improve prediction performance (average AUC increase of 4.2%), exhibit importance distributions aligned with developer intuition, and precisely identify high-risk behavioral patterns. Consequently, they enable actionable, team-level intervention recommendations. This work advances defect prediction from opaque, black-box alerts to a paradigm of behavior-rooted diagnosis coupled with prescriptive guidance.
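The comparison the summary describes — training a method-level classifier on habit-based metrics versus conventional code metrics and comparing AUC — can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's dataset or metric definitions: the feature names (`cognitive_load`, `habit_deviation`, `complexity`, `churn`) and the label-generating process are assumptions made for the sake of the example.

```python
# Hypothetical sketch: comparing AUC of habit-based vs. code-based features
# for method-level defect prediction. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic per-method features (illustrative stand-ins, not the paper's metrics):
cognitive_load = rng.normal(0, 1, n)   # habit metric
habit_deviation = rng.normal(0, 1, n)  # habit metric
complexity = rng.normal(0, 1, n)       # code metric
churn = rng.normal(0, 1, n)            # history metric

# Toy assumption: defect probability is driven mostly by the habit metrics.
logit = 1.2 * cognitive_load + 0.8 * habit_deviation + 0.4 * complexity
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_habit = np.column_stack([cognitive_load, habit_deviation])
X_code = np.column_stack([complexity, churn])

def auc_for(X, y):
    """Train a classifier on one feature set and return held-out AUC."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y
    )
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

print(f"habit-metric AUC: {auc_for(X_habit, y):.3f}")
print(f"code-metric  AUC: {auc_for(X_code, y):.3f}")
```

Because the synthetic labels are constructed to depend mainly on the habit features, the first AUC comes out higher here by design; on real projects the paper reports an average AUC gain of 4.2% for its proposed metrics.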

📝 Abstract
Software defect prediction using code metrics has been researched extensively over the past five decades. However, prediction harnessing non-software metrics remains under-researched. Given that the root cause of software defects is often attributed to human error, human factors theory may offer key forecasting metrics for actionable insights. This paper explores automated software defect prediction at the method level based on developers' coding habits. First, we propose a framework for selecting the metrics used for prediction. Next, we compare the performance of our metrics against the code and commit history metrics that prior research has shown to achieve the highest performance to date. Finally, we analyze the prediction importance of each metric. Based on our analyses of twenty-one large-scale, critical-infrastructure open-source software projects, we present: (1) a human error-based framework with metrics useful for method-level defect prediction; (2) models that use our proposed metrics and achieve better average prediction performance than the state-of-the-art code and history metrics; (3) evidence that prediction importance is distributed differently across metrics, with each novel metric having higher average importance than the code and history metrics; (4) a demonstration that the novel metrics substantially enhance the explainability, practicality, and actionability of software defect prediction models. We present a systematic approach to forecasting defect-prone software methods via a human error framework. This work empowers practitioners to act on predictions, empirically demonstrating how developer coding habits contribute to defects in software systems.
Problem

Research questions and friction points this paper is trying to address.

Predicting software defects using developer coding habits
Comparing human error metrics with traditional code metrics
Enhancing defect prediction explainability and actionability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human error-based framework for defect prediction
Developer coding habits as key metrics
Superior performance over code and history metrics
Carlos Andrés Ramírez Cataño
Doctoral Program in Risk and Resilience Engineering, Graduate School of Science and Technology, University of Tsukuba, 305-8573 Japan
Makoto Itoh
University of Tsukuba
cognitive systems engineering, ADAS