A Comprehensive Multi-Vocal Empirical Study of ML Cloud Service Misuses

📅 2025-03-12
🤖 AI Summary
Misuse of machine learning (ML) cloud services is widespread in practice, degrading system quality and maintainability, yet it lacks a unified definition or taxonomy. Method: We conduct a multi-source empirical study—integrating a systematic literature review, an analysis of cloud provider documentation, code mining of 377 GitHub projects, and a survey with thematic coding of 50 practitioners—to systematically characterize such misuse. Contribution/Results: We introduce the first comprehensive taxonomy of 20 ML cloud service misuses, 16 of which are newly identified. Our multi-vocal methodology unifies academic, industrial, open-source, and survey evidence. Our findings reveal that misuses occur frequently in real-world deployments, primarily due to misconceptions about service capabilities and inadequate documentation. This work establishes an empirically grounded foundation for ML engineering education, best-practice guidelines, and automated detection tools.

📝 Abstract
Machine Learning (ML) models are widely used across various domains, including medical diagnostics and autonomous driving. To support this growth, cloud providers offer ML services that ease the integration of ML components into software systems. Evolving business requirements and the popularity of ML services have led practitioners of all skill levels to implement and maintain ML service-based systems. However, they may not always adhere to optimal design and usage practices for ML cloud services, resulting in common misuses that can significantly degrade the quality of ML service-based systems and adversely affect their maintenance and evolution. Although much research has been conducted on ML service misuse, a consistent terminology and specification for these misuses remain absent. In this paper, we therefore conduct a comprehensive, multi-vocal empirical study exploring the prevalence of ML cloud service misuses in practice. We propose a catalog of 20 ML cloud service misuses, most of which have not been studied in prior research. To achieve this, we conducted a) a systematic literature review of studies on ML misuses, b) a gray literature review of the official documentation provided by major cloud providers, c) an empirical analysis of a curated set of 377 ML service-based systems on GitHub, and d) a survey of 50 ML practitioners. Our results show that ML service misuses are common in both open-source projects and industry, often stemming from a lack of understanding of service capabilities and insufficient documentation. This emphasizes the importance of ongoing education in best practices for ML services, which is the focus of this paper, while also highlighting the need for tools to automatically detect and refactor ML misuses.
Problem

Research questions and friction points this paper is trying to address.

Identifies common misuses of ML cloud services in software systems.
Proposes a catalog of 20 ML cloud service misuses, most previously unstudied.
Highlights the need for education and tools to detect ML service misuses.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Catalog of 20 ML cloud service misuses
Multi-vocal empirical study methodology
Motivation for tools to automatically detect and refactor ML misuses
Hadil Ben Amor
École de Technologie Supérieure, Canada
Manel Abdellatif
Professor - École de Technologie Supérieure, Montreal, Canada
Software Evolution · Service Computing · Machine Learning · Trustworthy AI
Taher Ghaleb
Trent University, Canada