🤖 AI Summary
This study addresses the challenges that data dynamism and experimental iteration in ML system development pose to traditional agile management. Through a systematic mapping study (2008–2024) using a hybrid search strategy, we identified and analyzed 27 primary studies. We synthesized the findings into the first comprehensive, lifecycle-spanning agile management framework for ML, comprising eight thematic dimensions (including iteration flexibility, the minimal viable model, and ML-specific artifacts), and identified inaccurate effort estimation as the core managerial bottleneck in ML projects. We also cataloged eight existing agile frameworks and their adaptation patterns for ML contexts. This work fills a critical gap by providing the first systematic review of ML-oriented agile management, establishing a theoretical foundation and a classification benchmark for future empirical research, while highlighting the current lack of rigorous validation and the urgent need for industrial-scale effectiveness evaluation.
📝 Abstract
[Context] Machine learning (ML)-enabled systems are pervasive in our society, driving significant digital transformations. The dynamic nature of ML development, characterized by experimental cycles and rapid changes in data, poses challenges to traditional project management. Agile methods, with their flexibility and incremental delivery, seem well-suited to address this dynamism. However, it remains unclear how to effectively apply these methods in the context of ML-enabled systems, whose challenges require tailored approaches. [Goal] Our goal is to outline the state of the art in agile management for ML-enabled systems. [Method] We conducted a systematic mapping study using a hybrid search strategy that combines database searches with backward and forward snowballing iterations. [Results] Our study identified 27 papers published between 2008 and 2024. From these, we identified eight frameworks and categorized recommendations and practices into eight key themes, such as Iteration Flexibility, Innovative ML-specific Artifacts, and the Minimal Viable Model. The main challenge identified across the studies was accurate effort estimation for ML-related tasks. [Conclusion] This study contributes by mapping the state of the art and identifying open gaps in the field. While relevant work exists, more robust empirical evaluation is still needed to validate these contributions.