Hawkeye: Reproducing GPU-Level Non-Determinism

📅 2026-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses non-determinism in GPU computation, which prevents bit-accurate reproduction of machine learning models on CPUs and undermines trustworthy auditing. It is the first to systematically model the key sources of non-determinism in NVIDIA Tensor Cores, including rounding modes, subnormal number handling, and non-associative accumulation order, and it builds a framework that losslessly reproduces matrix multiplications in FP16, BFP16, and FP8 precision, across the Ampere, Hopper, and Lovelace architectures, on CPU hardware. The method imposes no additional overhead on model owners and achieves bit-level cross-platform reproducibility in every evaluated scenario, laying a foundation for verifiable and efficient machine learning auditing.
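The "non-associative accumulation order" the summary mentions is easy to demonstrate: summing the same FP16 values in two different orders can yield different bit patterns, which is why a CPU must know the GPU's exact accumulation order to reproduce its results. A small NumPy illustration (the specific values are chosen here for demonstration, not taken from the paper):

```python
import numpy as np

# Four FP16 values whose sum depends on accumulation order. The FP16 spacing
# at 2048 is 2, so 2048 + 1 rounds back to 2048 and small terms can vanish.
vals = np.array([2048.0, 1.0, 1.0, -2048.0], dtype=np.float16)

# Left-to-right accumulation: both 1.0 terms are absorbed and lost.
seq = np.float16(0.0)
for v in vals:
    seq = np.float16(seq + v)

# Pairwise accumulation: (2048 - 2048) + (1 + 1) keeps the small terms.
pairwise = np.float16(np.float16(vals[0] + vals[3]) + np.float16(vals[1] + vals[2]))

print(seq, pairwise)  # 0.0 vs. 2.0: same inputs, different results
```

The two orders produce 0.0 and 2.0 from identical inputs, so any framework claiming bit-exact reproduction must pin down the order, exactly the kind of property Hawkeye's crafted tests recover from Tensor Cores.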

📝 Abstract
We present Hawkeye, a system for analyzing and reproducing GPU-level arithmetic operations. Using our framework, anyone can re-execute on a CPU the exact matrix multiplication operations underlying a machine learning model training or inference workflow that was executed on an NVIDIA GPU, without any precision loss. This is in stark contrast to prior approaches to verifiable machine learning, which either introduce significant computation overhead to the original model owner, or suffer from non-robustness and quality degradation. The main technical contribution of Hawkeye is a systematic sequence of carefully crafted tests that study rounding direction, subnormal number handling, and order of (non-associative) accumulation during matrix multiplication on NVIDIA's Tensor Cores. We test and evaluate our framework on multiple NVIDIA GPU architectures (Ampere, Hopper, and Lovelace) and precision types (FP16, BFP16, FP8). In all test cases, Hawkeye enables perfect reproduction of matrix multiplication on a CPU, paving the way for efficient and trustworthy third-party auditing of ML model training and inference.
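The abstract's core claim, re-executing a GPU matmul on a CPU with no precision loss, amounts to making every arithmetic detail explicit: input precision, accumulator precision, and accumulation order. A minimal NumPy sketch of that idea follows; the FP32 accumulator and the 4-wide chunked, left-to-right ordering are illustrative assumptions, not the behavior Hawkeye actually recovered from Tensor Cores.

```python
import numpy as np

def emulated_matmul_fp16(A, B, chunk=4):
    """Matrix multiply with FP16 inputs and FP32 accumulation in a fixed,
    explicit order -- a CPU-side stand-in for one possible Tensor Core mode."""
    A16 = A.astype(np.float16)
    B16 = B.astype(np.float16)
    m, k = A16.shape
    k2, n = B16.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n), dtype=np.float32)
    for i in range(m):
        for j in range(n):
            acc = np.float32(0.0)
            # Accumulate in fixed-size chunks, left to right. The chunk width
            # and ordering are hypothetical; Hawkeye's tests exist precisely
            # to discover the real hardware ordering.
            for s in range(0, k, chunk):
                partial = np.float32(0.0)
                for t in range(s, min(s + chunk, k)):
                    # An FP16 x FP16 product is exactly representable in FP32.
                    partial = np.float32(
                        partial + np.float32(A16[i, t]) * np.float32(B16[t, j])
                    )
                acc = np.float32(acc + partial)
            C[i, j] = acc
    return C
```

Because every rounding step and the accumulation order are fixed, two runs of this function, on any IEEE 754 CPU, produce bitwise-identical outputs; that determinism is the property Hawkeye establishes for real Tensor Core matrix multiplications.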
Problem

Research questions and friction points this paper is trying to address.

GPU non-determinism
matrix multiplication reproducibility
Tensor Cores
numerical precision
verifiable machine learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPU non-determinism
exact reproduction
Tensor Cores
matrix multiplication
verifiable machine learning