🤖 AI Summary
This study addresses the lack of a systematic understanding of self-admitted technical debt (SATD) characteristics and their impact on software quality in machine learning (ML) systems. Method: Through an empirical comparison of 318 open-source ML projects against a matched set of 318 non-ML projects, we applied static annotation mining, manual thematic coding, survival analysis, and multidimensional statistical modeling. Contribution/Results: ML projects exhibit twice the SATD density of non-ML projects, with debt concentrated in data-preprocessing and model-generation modules; this debt is introduced earlier and persists longer. A significantly higher median proportion of source files contains SATD, and multi-file, low-complexity changes are the primary driver of long-standing debt. We identify high-risk components and establish the first empirically grounded, evolution-aware SATD benchmark for ML software, providing both theoretical insights and practical guidance for sustainable ML system maintenance and evolution.
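The survival analysis mentioned above can be illustrated with a minimal Kaplan-Meier product-limit estimator over SATD lifetimes. This is a generic sketch of the technique, not the study's actual implementation: the lifetimes below are hypothetical, and a right-censored entry stands for a SATD comment still present at the last analyzed snapshot.

```python
def kaplan_meier(durations, observed):
    """Kaplan-Meier survival estimate over SATD lifetimes.

    durations: days each SATD comment lived (until removal or study end)
    observed:  True if the comment was removed (an observed event),
               False if still present at study end (right-censored)
    """
    # Distinct times at which a removal event actually occurred, ascending.
    event_times = sorted({d for d, obs in zip(durations, observed) if obs})
    survival = {}
    s = 1.0
    for t in event_times:
        at_risk = sum(1 for d in durations if d >= t)  # still "alive" just before t
        events = sum(1 for d, o in zip(durations, observed) if d == t and o)
        s *= 1.0 - events / at_risk                    # product-limit step
        survival[t] = s
    return survival

# Hypothetical SATD lifetimes in days; False means the debt outlived the study window.
lifetimes = [30, 30, 90, 200, 400, 400]
removed = [True, True, True, False, True, False]
curve = kaplan_meier(lifetimes, removed)  # e.g. curve[90] == 0.5
```

Plotting such a curve for ML and non-ML projects side by side is one way to visualize the "introduced earlier, persists longer" finding.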
📝 Abstract
The emergence of open-source ML libraries such as TensorFlow and Google AutoML has enabled developers to harness state-of-the-art ML algorithms with minimal overhead. However, during this accelerated ML development process, developers often make sub-optimal design and implementation decisions, introducing technical debt that, if not addressed promptly, can significantly degrade the quality of ML-based software. Developers frequently acknowledge these sub-optimal choices through code comments written during development. Such comments, which highlight areas requiring additional work or future refinement, are known as self-admitted technical debt (SATD). This paper investigates SATD in ML code by analyzing 318 open-source ML projects across five domains, along with 318 non-ML projects. We detected SATD in source code comments across project snapshots, manually analyzed a sample of the identified SATD to understand the nature of technical debt in ML code, and performed a survival analysis to understand how such debt evolves. We observed: i) ML projects have a median SATD percentage twice that of non-ML projects. ii) ML pipeline components for data preprocessing and model generation are more susceptible to debt than model validation and deployment components. iii) SATD appears earlier in the development process in ML projects than in non-ML projects. iv) Long-lasting SATD is typically introduced in extensive code changes that span multiple files of low complexity.
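The SATD detection step described in the abstract is commonly done by matching comments against known SATD keyword patterns. The sketch below uses a small, hypothetical subset of such keywords (in the spirit of Potdar and Shihab's pattern list); the study's actual detector and pattern set may differ, and this version only inspects Python-style `#` line comments.

```python
import re

# Hypothetical subset of SATD indicator keywords; real pattern lists are larger.
SATD_PATTERNS = re.compile(
    r"\b(todo|fixme|hack|workaround|temporary fix|not sure|ugly)\b",
    re.IGNORECASE,
)

def find_satd_comments(source: str):
    """Return (line_number, comment_text) pairs for comments flagged as SATD."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        # This sketch only handles '#' line comments, ignoring strings etc.
        if "#" in line:
            comment = line.split("#", 1)[1].strip()
            if SATD_PATTERNS.search(comment):
                hits.append((lineno, comment))
    return hits

sample = """\
x = load_data()      # TODO: handle missing values properly
y = normalize(x)     # scale to unit variance
model = fit(y)       # HACK: hard-coded hyperparameters for now
"""
flagged = find_satd_comments(sample)  # flags lines 1 and 3, not line 2
```

Running such a detector on every snapshot of a repository, then tracking when each flagged comment appears and disappears, yields the introduction and removal times that feed the survival analysis.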