🤖 AI Summary
GPU tensor cores (Volta/Turing/Ampere) exhibit nonstandard mixed-precision arithmetic whose semantics lack a formal characterization. Method: We propose the first SMT-based formal semantic model capturing their rounding modes, precision configurations, and accumulation order. Contribution/Results: Our analysis reveals, for the first time, that tensor cores do not use round-to-zero for accumulation as previously reported (their behavior is instead consistent with a mode such as round-to-odd), and that their five-term accumulator requires three extra carry-out bits for full accuracy. We further demonstrate that a nominally more accurate error-correction algorithm can actually be less accurate on specific inputs. The model enables automated synthesis of discriminating test cases, confirming many prior findings while correcting several mischaracterizations. This work establishes a verifiable, portable semantic foundation for hardware simulation, compiler development, and numerical algorithm design, bridging a critical gap between hardware specification and numerical reliability.
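To make the rounding mode named above concrete, here is a minimal Python sketch of round-to-odd on an idealized uniform grid. The function name and grid-based formulation are ours for illustration only; this is not the paper's formal model:

```python
def round_to_odd(x: float, ulp: float = 1.0) -> float:
    """Round x to a multiple of `ulp` using round-to-odd:
    exact values are kept; inexact values round to the neighboring
    grid point whose quotient is odd. Because results never land on
    an even grid point inexactly, a later round-to-nearest step
    cannot suffer double-rounding errors."""
    q, r = divmod(x, ulp)
    q = int(q)
    if r != 0 and q % 2 == 0:
        q += 1  # bump an even quotient to its odd neighbor
    return q * ulp
```

For example, `round_to_odd(2.5)` and `round_to_odd(3.4)` both yield `3.0`, while the exact value `4.0` is preserved.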
📝 Abstract
Many recent computational accelerators provide non-standard (e.g., reduced precision) arithmetic operations to enhance performance for floating-point matrix multiplication. Unfortunately, the behavior of these accelerators is not widely understood and is insufficiently documented. This makes it difficult for tool builders beyond the original vendor to target or simulate the hardware correctly, or for algorithm designers to be confident in their code. To address these gaps, prior studies have probed the behavior of these units with manually crafted tests. Such tests are cumbersome to design, and adapting them as the accelerators evolve requires repeated manual effort. We present a formal model for the tensor cores of Nvidia's Volta, Turing, and Ampere GPUs. We identify specific properties -- rounding mode, precision, and accumulation order -- that drive these cores' behavior. We formalize these properties and then use the formalization to automatically generate discriminating inputs that illustrate differences among machines. Our results confirm many of the findings of previous tensor core studies, but also identify subtle disagreements. In particular, Nvidia's machines do not, as previously reported, use round-to-zero for accumulation, and their 5-term accumulator requires 3 extra carry-out bits for full accuracy. Using our formal model, we analyze two existing algorithms that use half-precision tensor cores to accelerate single-precision matrix multiplication with error correction. Our analysis reveals that the newer algorithm, designed to be more accurate than the first, is actually less accurate for certain inputs.
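The error-correction approach mentioned in the abstract can be sketched with NumPy, emulating the tensor-core contract of fp16 inputs with fp32 accumulation. This is a simplified illustration of the general split-based scheme; the function names and the specific three-product recovery are our assumptions, not the paper's exact algorithms:

```python
import numpy as np

def split_fp16(a: np.ndarray):
    """Split an fp32 matrix into a half-precision high part and a
    half-precision residual capturing the conversion error."""
    hi = a.astype(np.float16)
    lo = (a - hi.astype(np.float32)).astype(np.float16)
    return hi, lo

def matmul_fp16_corrected(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Approximate an fp32 matmul from fp16 inputs with fp32
    accumulation (the tensor-core contract), plus error correction:
    A*B ~= Ahi*Bhi + Ahi*Blo + Alo*Bhi (the tiny Alo*Blo term
    is dropped)."""
    a_hi, a_lo = split_fp16(a)
    b_hi, b_lo = split_fp16(b)
    f32 = np.float32
    return (a_hi.astype(f32) @ b_hi.astype(f32)
            + a_hi.astype(f32) @ b_lo.astype(f32)
            + a_lo.astype(f32) @ b_hi.astype(f32))
```

On typical random inputs this recovers most of the accuracy lost by a pure fp16 multiply, which is what makes the abstract's finding, that a refined variant of such a scheme can be less accurate on certain inputs, notable.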