🤖 AI Summary
Defect detection in deep learning frameworks suffers from low diversity and poor validity of generated models, as well as limited practical utility of the identified bugs. To address this, we propose DevMuT, the first framework testing method that explicitly incorporates developer expertise. DevMuT designs mutation operators and constraint mechanisms grounded in real-world development practices, covering both the training and inference stages to generate highly realistic and syntactically valid model variants. Evaluated on PyTorch, JAX, and MindSpore with 29 industrial-scale models, DevMuT improves model diversity by 71.68% and the validity rate by 28.20%. It uncovered 117 defects, of which 63 were confirmed, 24 fixed, and 8 classified as high-impact; the method has been integrated into the MindSpore community. Our core contribution lies in leveraging developer knowledge to guide mutation, substantially enhancing the practical relevance and real-world coverage of defect detection.
📄 Abstract
Deep learning (DL) frameworks are the fundamental infrastructure for various DL applications. Framework defects can cause disastrous accidents and thus require thorough detection. In previous studies, researchers adopted DL models as test inputs, combined with mutation to generate more diverse models. Though these studies demonstrate promising results, most detected defects are considered trivial (i.e., either treated as edge cases or ignored by the developers). To identify important bugs that matter to developers, we propose a novel DL framework testing method, DevMuT, which generates models by adopting mutation operators and constraints derived from developer expertise. DevMuT simulates developers' common operations in development and detects more diverse defects within more stages of the DL model lifecycle (e.g., model training and inference). We evaluate the performance of DevMuT on three widely used DL frameworks (i.e., PyTorch, JAX, and MindSpore) with 29 DL models from nine types of industry tasks. The experiment results show that DevMuT outperforms state-of-the-art baselines: it achieves at least a 71.68% improvement on average in the diversity of generated models and a 28.20% improvement on average in the legal rates of generated models. Moreover, DevMuT detects 117 defects, 63 of which are confirmed, 24 are fixed, and eight are confirmed by developers to be of high value. Finally, DevMuT has been deployed in the MindSpore community since December 2023. These results demonstrate the effectiveness of DevMuT in detecting defects that are close to real-world scenarios and of concern to developers.
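To make the overall approach concrete, the sketch below illustrates the general idea of mutation-based differential testing that DevMuT builds on: apply a developer-style mutation to a model, run the mutant on several framework "backends", and flag output disagreements as candidate defects. All names here (`dense`, `mutate_insert_relu`, `differential_test`) are illustrative toys standing in for real framework operations, not DevMuT's actual API.

```python
# Minimal, hypothetical sketch of mutation-based differential testing.
# A "model" is a list of layer functions; "backends" stand in for
# PyTorch / JAX / MindSpore execution engines.

def relu(x):
    # Element-wise ReLU activation.
    return [max(0.0, v) for v in x]

def dense(weights):
    # Toy fully-connected layer: y_i = sum_j w_ij * x_j.
    def layer(x):
        return [sum(w * v for w, v in zip(row, x)) for row in weights]
    return layer

def mutate_insert_relu(layers, position):
    # Developer-style mutation: insert an activation at a legal position,
    # mimicking an edit a developer might actually make.
    return layers[:position] + [relu] + layers[position:]

def run_model(layers, x):
    # Forward pass through the layer list (one "backend").
    for layer in layers:
        x = layer(x)
    return x

def differential_test(layers, x, backends, tol=1e-6):
    # Run the same mutant on every backend; any disagreement beyond
    # the tolerance is reported as a candidate framework defect.
    outputs = [backend(layers, x) for backend in backends]
    ref = outputs[0]
    for out in outputs[1:]:
        if any(abs(a - b) > tol for a, b in zip(ref, out)):
            return False  # inconsistent -> candidate defect
    return True

model = [dense([[0.5, -1.0], [1.5, 2.0]])]
mutant = mutate_insert_relu(model, 1)
consistent = differential_test(mutant, [1.0, -2.0], [run_model, run_model])
print(consistent)  # True: both (identical) backends agree on this mutant
```

In the real setting, the backends are independent framework implementations and the mutations are constrained by developer knowledge so that mutants remain valid, trainable models rather than degenerate edge cases.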