GPU-Fuzz: Finding Memory Errors in Deep Learning Frameworks

πŸ“… 2026-02-11
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the critical challenge of GPU memory errors in deep learning frameworks, which can lead to system crashes or security vulnerabilities and demand efficient detection mechanisms. The authors propose a novel approach that integrates formal constraint modeling with fuzz testing: operator parameters are encoded as logical constraints, and a solver is employed to generate test cases that precisely trigger boundary behaviors, thereby systematically exposing memory defects in GPU kernels. This study represents the first integration of formal methods with fuzzing in this context, substantially enhancing both detection efficiency and coverage. Empirical evaluation across PyTorch, TensorFlow, and PaddlePaddle demonstrates the method’s effectiveness and practicality, uncovering 13 previously unknown memory bugs.

πŸ“ Abstract
GPU memory errors are a critical threat to deep learning (DL) frameworks, leading to crashes or even security issues. We introduce GPU-Fuzz, a fuzzer that locates these issues efficiently by modeling operator parameters as formal constraints. GPU-Fuzz uses a constraint solver to generate test cases that systematically probe error-prone boundary conditions in GPU kernels. Applied to PyTorch, TensorFlow, and PaddlePaddle, GPU-Fuzz uncovered 13 previously unknown bugs, demonstrating its effectiveness at finding memory errors.
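The core idea — encode operator parameter constraints, then generate inputs on or just beyond the valid boundary — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the operator name (`conv2d`), its parameter ranges, and its constraint are hypothetical examples, and a real system would use an SMT solver rather than enumeration.

```python
import itertools

# Hypothetical parameter ranges for an illustrative conv2d-like operator
# (not taken from the paper).
PARAM_RANGES = {
    "input_size": (1, 224),
    "kernel_size": (1, 11),
    "stride": (1, 4),
}

def conv2d_constraints(p):
    # A valid call requires the kernel to fit inside the input.
    return p["kernel_size"] <= p["input_size"]

def boundary_values(lo, hi):
    # Probe values at and just beyond each bound; out-of-range values
    # are the ones most likely to trigger GPU memory errors.
    return sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})

def generate_boundary_cases(ranges, constraint):
    names = list(ranges)
    for combo in itertools.product(*(boundary_values(*ranges[n]) for n in names)):
        case = dict(zip(names, combo))
        # Emit every boundary combination, tagged with whether it
        # satisfies the operator's validity constraint -- invalid
        # cases exercise the framework's error handling.
        yield case, constraint(case)

cases = list(generate_boundary_cases(PARAM_RANGES, conv2d_constraints))
```

Each generated case would then be fed to the target framework's operator while a GPU memory checker watches for out-of-bounds accesses; a solver-based version could add cross-parameter constraints (e.g., relating stride, padding, and output size) that simple enumeration handles poorly.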
Problem

Research questions and friction points this paper is trying to address.

GPU memory errors
deep learning frameworks
memory safety
bug detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPU-Fuzz
memory error detection
constraint-based fuzzing
deep learning frameworks
GPU kernel testing