🤖 AI Summary
This study systematically identifies key issues inducing code technical debt (TD) in machine learning (ML) workflows. To address gaps in TD understanding specific to ML, we conduct a phase-wise analysis of ML processes (data acquisition, preprocessing, model development, and evaluation), complemented by focus group sessions with nine experienced ML engineers and a dual-dimension assessment of each issue's relevance and frequency of occurrence. We thereby construct and empirically validate the first comprehensive catalog of 30 ML-specific code TD triggers. Results reveal the data preprocessing phase as the highest-risk stage, contributing 14 high-relevance triggers, predominantly "patchwork" practices such as ad-hoc missing-value imputation, outlier handling, and heuristic feature selection. Based on these findings, we propose a novel TD taxonomy tailored to ML workflows and publicly release a refined, high-impact catalog of 24 TD triggers. This work enhances practitioners' awareness of ML code maintainability challenges and long-term operational costs.
📝 Abstract
[Context] Technical debt (TD) in machine learning (ML) systems, much like its counterpart in software engineering (SE), can lead to future rework, posing risks to productivity, quality, and team morale. Despite growing attention to TD in SE, ML-specific code-related TD remains underexplored. [Objective] This paper aims to identify and discuss the relevance of code-related issues that lead to TD in ML code throughout the ML workflow. [Method] The study first compiled a list of 34 potential issues contributing to TD in ML code by examining the phases of the ML workflow, their typical associated activities, and problem types. This list was refined through two focus group sessions with nine experienced ML professionals, in which each issue was assessed for its relevance and its frequency of occurrence as a contributor to TD in ML code. [Results] The list of issues contributing to TD in the source code of ML systems was refined from 34 to 30, with 24 of these issues considered highly relevant. The data pre-processing phase was the most critical, with 14 issues considered highly relevant. Shortcuts in code for typical pre-processing tasks (e.g., handling missing values, outliers, inconsistencies, scaling, rebalancing, and feature selection) often result in "patch fixes" rather than sustainable solutions, leading to the accumulation of TD and increasing maintenance costs. Relevant issues were also found in the data collection, model creation and training, and model evaluation phases. [Conclusion] We have made the final list of issues available to the community and believe it will help raise awareness of issues that must be addressed throughout the ML workflow to reduce TD and improve the maintainability of ML code.
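To make the "patch fix" pattern concrete, here is a minimal, hypothetical sketch (not taken from the paper's catalog) of the kind of ad-hoc pre-processing shortcut the abstract describes, contrasted with a slightly more sustainable variant. All names (`patch_fix_preprocess`, `PreprocessConfig`, the `age`/`income` columns, and the thresholds) are invented for illustration.

```python
import numpy as np
import pandas as pd
from dataclasses import dataclass

# Hypothetical "patch fix": imputation and outlier handling are hard-coded
# inline with undocumented magic constants, so the decisions are invisible
# to reviewers and easy to duplicate inconsistently elsewhere.
def patch_fix_preprocess(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["age"] = df["age"].fillna(-1)      # why -1? nobody remembers
    return df[df["income"] < 1e6]         # ad-hoc outlier cut

# More sustainable variant: the same decisions, but named, documented,
# and kept in one configurable place shared by training and serving code.
@dataclass
class PreprocessConfig:
    age_fill: float     # documented imputation sentinel
    income_cap: float   # documented outlier threshold

def preprocess(df: pd.DataFrame, cfg: PreprocessConfig) -> pd.DataFrame:
    df = df.copy()
    df["age"] = df["age"].fillna(cfg.age_fill)
    return df[df["income"] < cfg.income_cap]

df = pd.DataFrame(
    {"age": [25, np.nan, 40], "income": [50_000, 70_000, 2_000_000]}
)
out = preprocess(df, PreprocessConfig(age_fill=-1, income_cap=1e6))
print(out.shape)  # the extreme-income row is dropped, the NaN age imputed
```

Both functions produce the same output on this toy frame; the difference is that the second keeps the imputation and outlier choices auditable, which is exactly the kind of sustainability the abstract contrasts with accumulating patch fixes.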