🤖 AI Summary
This work addresses the critical challenge of GPU memory errors in deep learning frameworks, which can lead to system crashes or security vulnerabilities and demand efficient detection mechanisms. The authors propose a novel approach that integrates formal constraint modeling with fuzz testing: operator parameters are encoded as logical constraints, and a solver is employed to generate test cases that precisely trigger boundary behaviors, thereby systematically exposing memory defects in GPU kernels. This study represents the first integration of formal methods with fuzzing in this context, substantially enhancing both detection efficiency and coverage. Empirical evaluation across PyTorch, TensorFlow, and PaddlePaddle demonstrates the method's effectiveness and practicality, uncovering 13 previously unknown memory bugs.
📄 Abstract
GPU memory errors are a critical threat to deep learning (DL) frameworks, leading to crashes or even security issues. We introduce GPU-Fuzz, a fuzzer that efficiently locates these errors by modeling operator parameters as formal constraints. GPU-Fuzz employs a constraint solver to generate test cases that systematically probe error-prone boundary conditions in GPU kernels. Applied to PyTorch, TensorFlow, and PaddlePaddle, GPU-Fuzz uncovered 13 previously unknown bugs, demonstrating its effectiveness at finding memory errors.
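To make the idea of constraint-driven boundary testing concrete, here is a minimal sketch in Python. It stands in for the paper's approach with a deliberately simplified model: interval constraints on integer operator parameters instead of a full SMT solver, and exhaustive enumeration of boundary values instead of solver-guided generation. All names (`IntParam`, `boundary_cases`, the example parameters) are illustrative assumptions, not part of GPU-Fuzz.

```python
from dataclasses import dataclass
from itertools import product


@dataclass(frozen=True)
class IntParam:
    """An integer operator parameter with an inclusive valid range [lo, hi]."""
    name: str
    lo: int
    hi: int

    def boundary_values(self):
        # Values at and just beyond each bound are the ones most likely to
        # trigger off-by-one indexing and out-of-bounds GPU memory accesses.
        return {self.lo - 1, self.lo, self.lo + 1,
                self.hi - 1, self.hi, self.hi + 1}


def boundary_cases(params):
    """Yield every combination of the parameters' boundary values."""
    names = [p.name for p in params]
    for combo in product(*(sorted(p.boundary_values()) for p in params)):
        yield dict(zip(names, combo))


# Hypothetical example: an index-select-like operator with a dimension
# size constrained to [1, 8] and an index constrained to [0, 7].
params = [IntParam("dim_size", 1, 8), IntParam("index", 0, 7)]
cases = list(boundary_cases(params))
# Cases such as {"dim_size": 8, "index": 8} deliberately step one past the
# valid range, probing the out-of-bounds behavior a GPU kernel must handle.
```

Each generated case would then be fed to the operator under test (e.g., under a GPU memory sanitizer) to check whether the kernel handles the boundary input safely; a real implementation would additionally encode cross-parameter constraints such as `index < dim_size`, which is where an SMT solver earns its keep.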