Mock Deep Testing: Toward Separate Development of Data and Models for Deep Learning

📅 2025-02-11
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Deep learning (DL) systems exhibit strong coupling between data and models, impeding conventional unit testing; existing software engineering research has not systematically addressed this challenge. This paper proposes *mock deep testing*, a methodology that enables independent, unit-level testing of DL data preparation and model design through workflow decoupling, modular architecture, and dependency mocking. Key contributions: (1) the first mock-based unit testing paradigm designed specifically for DL applications; (2) principled guidelines for decoupling DL workflows; and (3) KUnit, the first mock-testing framework supporting Keras. Evaluated on 50 real-world DL programs, KUnit's mocks detected 63 defects (10 in data preparation, 53 in model design); a 36-participant user study showed that developers using KUnit resolved 63 issues (25 in data preparation, 38 in model design), demonstrating its effectiveness for early defect detection and lightweight component validation.

πŸ“ Abstract
While deep learning (DL) has permeated and become an integral component of many critical software systems, software engineering research has not yet explored how to separately test the data and models on which DL approaches depend to work effectively. The main challenge in independently testing these components arises from the tight dependency between data and models. This research explores this gap, introducing our methodology of mock deep testing for unit testing of DL applications. To enable unit testing, we introduce a design paradigm that decomposes the workflow into distinct, manageable components, minimizes sequential dependencies, and modularizes key stages of the DL workflow. For unit testing these components, we propose modeling their dependencies using mocks. This modular approach facilitates independent development and testing of the components, ensuring comprehensive quality assurance throughout the development process. We have developed KUnit, a framework enabling mock deep testing for the Keras library. We empirically evaluated KUnit to determine the effectiveness of mocks. Our assessment of 50 DL programs obtained from Stack Overflow and GitHub shows that mocks effectively identified 10 issues in the data preparation stage and 53 issues in the model design stage. We also conducted a user study with 36 participants using KUnit to assess the effectiveness of our approach. Participants using KUnit successfully resolved 25 issues in the data preparation stage and 38 issues in the model design stage. Our findings highlight that mock objects provide a lightweight emulation of dependencies for unit testing, facilitating early bug detection. Lastly, to evaluate the usability of KUnit, we conducted a post-study survey. The results reveal that KUnit is helpful to DL application developers, enabling them to test each component independently and effectively at different stages.
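The core idea, a workflow decoupled into stages whose dependencies are emulated by mocks, can be sketched as follows. This is an illustrative example using Python's standard `unittest.mock`, not KUnit's actual API; the function names (`prepare_data`, `design_model`, `build_pipeline`) are hypothetical, and the model stage returns a simple layer spec rather than a real Keras model so the sketch stays dependency-free.

```python
# Hedged sketch of mock deep testing: unit-test the model-design stage
# in isolation by mocking out the data-preparation stage it depends on.
# All names here are illustrative, not from KUnit.
from unittest import mock


def prepare_data(path):
    """Data preparation stage: would normally load and preprocess a real
    dataset (expensive, I/O-bound). Deliberately not exercised here."""
    raise NotImplementedError


def design_model(input_dim, num_classes):
    """Model design stage: returns a layer spec instead of a real Keras
    model so the example needs no DL framework installed."""
    return [("dense", input_dim, 64), ("relu",), ("dense", 64, num_classes)]


def build_pipeline(path, num_classes, loader=prepare_data):
    """Glue coupling the two stages; `loader` is injectable for testing."""
    features, _labels = loader(path)
    return design_model(input_dim=len(features[0]), num_classes=num_classes)


# Mock the data stage: emulate two samples with 4 features each, so the
# model stage can be validated without touching any real data files.
fake_loader = mock.Mock(return_value=([[0.0] * 4, [1.0] * 4], [0, 1]))
spec = build_pipeline("unused.csv", num_classes=3, loader=fake_loader)

fake_loader.assert_called_once_with("unused.csv")
assert spec[0] == ("dense", 4, 64)   # input width matches mocked data
assert spec[-1] == ("dense", 64, 3)  # output width matches num_classes
print("model design validated against mocked data:", spec)
```

The design choice mirrors the paper's premise: because `loader` is injectable, the model-design stage can be developed and tested before any real data pipeline exists, and a shape mismatch between the two stages surfaces as a failing unit test rather than a runtime training error.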
Problem

Research questions and friction points this paper is trying to address.

Separate testing of data and models
Minimize dependencies in deep learning workflows
Enable unit testing with mock objects
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mock deep testing methodology introduced
Modularized DL workflow for testing
KUnit framework developed for the Keras library