GBM Returns the Best Prediction Performance among Regression Approaches: A Case Study of Stack Overflow Code Quality

📅 2025-05-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study pioneers the modeling of Java code quality prediction from Stack Overflow as a regression task, aiming to identify key determinants of code quality and to comparatively evaluate mainstream regression models. Leveraging static analysis, the authors extract 27 code-quality features and systematically assess six regression methods: Gradient Boosting Machine (GBM), XGBoost, Random Forest, Support Vector Regression (SVR), Multilayer Perceptron (MLP), and Linear Regression. Results show GBM achieves the highest predictive performance (R² = 0.82). Notably, longer code snippets tend to contain more code violations, higher-scored questions attract more views, and questions with more answers tend to include more erroneous code. This work establishes a quantifiable paradigm for code quality modeling and provides empirical evidence, along with a risk-alert mechanism, to guide developers in avoiding potentially hazardous code reuse.
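The model comparison described above can be sketched with scikit-learn. This is an illustrative reconstruction, not the paper's actual pipeline: the synthetic features stand in for the 27 static-analysis features, the target stands in for the violation count, and XGBoost is omitted since it is not part of scikit-learn. Models are ranked by cross-validated R², matching the paper's evaluation metric.

```python
# Hedged sketch of a six-way regression comparison on R^2 (illustrative data,
# not the paper's Stack Overflow dataset of 27 static-analysis features).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
# Stand-ins for post features such as code length, question score, answer count.
X = rng.normal(size=(n, 5))
# Target: a proxy "violation count" depending on the first features plus noise.
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n)

models = {
    "GBM": GradientBoostingRegressor(random_state=0),
    "Random Forest": RandomForestRegressor(random_state=0),
    "SVR": SVR(),
    "MLP": MLPRegressor(max_iter=2000, random_state=0),
    "Linear Regression": LinearRegression(),
}

# 5-fold cross-validated R^2 per model, as in the paper's comparison.
scores = {name: cross_val_score(m, X, y, cv=5, scoring="r2").mean()
          for name, m in models.items()}

for name, r2 in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: R^2 = {r2:.3f}")
```

On real Stack Overflow feature data the relationships are nonlinear, which is where ensemble methods like GBM gain their advantage over linear baselines.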

📝 Abstract
Practitioners are increasingly dependent on publicly available resources for supporting their knowledge needs during software development. This has caused a spotlight to be placed on these resources, where researchers have reported mixed outcomes around their quality. Stack Overflow, in particular, has been studied extensively, with evidence showing that code resources on this platform can be of poor quality at times. Limited research has explored the variables or factors that predict code quality on Stack Overflow, focusing instead on ranking content, identifying defects, and predicting future content. In many instances, the approaches used for prediction are not evaluated to identify the best techniques. Contextualizing the Stack Overflow code quality problem as regression-based, we examined the variables that predict Stack Overflow (Java) code quality, and the regression approach that provides the best predictive power. Six approaches were considered in our evaluation, where Gradient Boosting Machine (GBM) stood out. In addition, longer Stack Overflow code tended to have more code violations; questions that were scored higher also attracted more views; and the more answers that were added to questions on Stack Overflow, the more errors were typically observed in the code that was provided. These outcomes point to the value of the GBM ensemble learning mechanism, and to the need for the practitioner community to be prudent when contributing and reusing Stack Overflow Java coding resources.
Problem

Research questions and friction points this paper is trying to address.

Identifying variables predicting Stack Overflow Java code quality
Evaluating regression approaches for optimal code quality prediction
Assessing impact of post features on code violation frequency
Innovation

Methods, ideas, or system contributions that make the work stand out.

GBM ensemble learning predicts code quality best
Longer Stack Overflow code has more violations
Higher scored questions attract more views