🤖 AI Summary
This study addresses the nondeterministic outputs of large language models on GPUs, which persist even when deterministic modes are enabled, because the ordering of floating-point operations can vary between runs. The work presents the first systematic analysis of token-level probability distribution shifts, revealing that probabilities in the 0.1–0.9 range are significantly affected by nondeterminism, whereas extreme probabilities near 0 or 1 remain relatively stable. Through an analysis of floating-point precision under GPU execution and comparative experiments across multiple models, the authors propose a novel method that requires only a single inference pass to assess the impact of nondeterminism. This approach offers both a new perspective and a practical tool for quantifying and understanding nondeterministic behavior in large language model inference.
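The root cause described above, that floating-point results depend on the order of execution, can be illustrated with a minimal CPU-only sketch. The values below are illustrative and not taken from the study; they are chosen so that two summation orders of the same operands provably round differently:

```python
# Floating-point addition is not associative: reordering the same
# operands can change the rounded result. GPU reductions may pick the
# order dynamically, which is where the nondeterminism enters.

# Order 1: 1.0 is absorbed by 1e16 (it is smaller than the rounding
# step at that magnitude), so the large terms then cancel to 0.0.
s1 = sum([1e16, 1.0, -1e16])   # -> 0.0

# Order 2: the large terms cancel first, so the 1.0 survives.
s2 = sum([1e16, -1e16, 1.0])   # -> 1.0

print(s1, s2)  # prints: 0.0 1.0
```

The same operands, a different order, a different answer: in a massively parallel reduction the order is set by thread scheduling, so repeated runs of an identical computation can land on different low-order bits and, after a softmax, on slightly different token probabilities.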
📝 Abstract
The execution of Large Language Models (LLMs) has been shown to produce nondeterministic results when run on Graphics Processing Units (GPUs), even when they are configured to produce deterministic results. This is due to the finite-precision effects of the arithmetic operations, which depend on the order in which they are executed; this order, in turn, depends on the processes running concurrently on the GPU. Previous studies have focused on the impact of nondeterminism on the text generated by LLMs or on proposing mechanisms to achieve deterministic execution. This work takes a closer look at nondeterminism by analyzing the variations in the token probabilities rather than in the generated text. Interestingly, all the models evaluated show similar results in both the trends and the actual magnitudes of the probability variations. In particular, the results show that the effects of nondeterminism are significant for token probabilities in the range of 0.1 to 0.9, while they are much smaller when the probabilities are close to 0 or 1. This has significant implications for our understanding of nondeterminism. First, nondeterminism will likely have a non-negligible impact on the generated text when the temperature is not zero, as it introduces significant variations in the token probabilities except when they are close to 0 or 1. Second, it suggests that all models exhibit similar nondeterministic variations at the token-probability level; therefore, differences in the variability of the generated text, for example when measuring accuracy on a benchmark, seem to come from differences in token probabilities or response lengths. Third, it may be possible to estimate the impact of nondeterminism by running a single inference and analyzing the token-level probabilities, instead of having to run the same inference many times.
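The last implication suggests a cheap single-pass diagnostic. As a hedged illustration only (not the authors' actual method), one could measure what fraction of generation steps have a top-token probability inside the 0.1–0.9 band that the abstract identifies as sensitive to nondeterminism; the function name `sensitive_fraction` and its inputs are hypothetical:

```python
def sensitive_fraction(top_probs, lo=0.1, hi=0.9):
    """Fraction of generation steps whose top-token probability lies in
    the band [lo, hi] reported as most affected by nondeterminism.

    top_probs: per-step top-token probabilities from ONE inference pass.
    """
    if not top_probs:
        return 0.0
    flagged = [p for p in top_probs if lo <= p <= hi]
    return len(flagged) / len(top_probs)

# Hypothetical probabilities from a single pass: 3 of 5 steps fall in
# the sensitive band, so the proxy score is 0.6.
print(sensitive_fraction([0.99, 0.55, 0.97, 0.42, 0.88]))  # -> 0.6
```

A response with a score near 0 is dominated by near-certain tokens and should be comparatively stable across reruns; a high score indicates many steps where reordered arithmetic could plausibly flip the sampled token at nonzero temperature.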