Probability Consistency in Large Language Models: Theoretical Foundations Meet Empirical Discrepancies

📅 2025-05-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether autoregressive large language models (LLMs) learn consistent probability distributions under different token ordering schemes—forward, backward, and random permutations. Method: Theoretically, we establish the first rigorous proof that sequence perplexity is invariant to any factorization order. Empirically, we propose a multi-order tokenization retraining framework based on GPT-2 and integrate attention attribution analysis to systematically quantify distributional deviations. Contribution/Results: Models trained under all orderings violate the theoretical invariance, with random ordering exhibiting the largest deviation; forward and backward models are highly similar but not equivalent. Deviations primarily stem from positional encoding and locality bias in self-attention. These findings expose critical limitations in standard LLM evaluation paradigms and provide both a novel theoretical foundation and empirical benchmarks for assessing probabilistic consistency and reliability in LLMs.

📝 Abstract
Can autoregressive large language models (LLMs) learn consistent probability distributions when trained on sequences in different token orders? We prove formally that for any well-defined probability distribution, sequence perplexity is invariant under any factorization, including forward, backward, or arbitrary permutations. This result establishes a rigorous theoretical foundation for studying how LLMs learn from data and defines principled protocols for empirical evaluation. Applying these protocols, we show that prior studies examining ordering effects suffer from critical methodological flaws. We retrain GPT-2 models on scientific text in forward, backward, and arbitrarily permuted orders. We find systematic deviations from theoretical invariance across all orderings, with arbitrary permutations deviating strongly from both forward and backward models, which largely (but not completely) agree with one another. Deviations are traceable to differences in self-attention, reflecting positional and locality biases in processing. Our theoretical and empirical results provide novel avenues for understanding positional biases in LLMs and suggest methods for detecting when LLMs' probability distributions are inconsistent and therefore untrustworthy.
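The invariance claim in the abstract follows directly from the chain rule: factorizing a joint distribution in any position order yields the same sequence probability, hence the same perplexity. A minimal sketch over a toy joint distribution (all names and the distribution itself are illustrative, not from the paper):

```python
import itertools
import math
import random

# Toy joint distribution over length-3 binary sequences.
random.seed(0)
seqs = list(itertools.product([0, 1], repeat=3))
weights = [random.random() for _ in seqs]
total = sum(weights)
joint = {s: w / total for s, w in zip(seqs, weights)}

def perplexity(seq, order):
    """Perplexity of `seq` factorized in the given position order."""
    logp = 0.0
    seen = []  # positions already conditioned on
    for pos in order:
        # P(x_pos | x_seen) via marginalization of the exact joint.
        num = sum(p for s, p in joint.items()
                  if all(s[i] == seq[i] for i in seen + [pos]))
        den = sum(p for s, p in joint.items()
                  if all(s[i] == seq[i] for i in seen))
        logp += math.log(num / den)
        seen.append(pos)
    return math.exp(-logp / len(seq))

seq = (1, 0, 1)
fwd = perplexity(seq, [0, 1, 2])   # forward factorization
bwd = perplexity(seq, [2, 1, 0])   # backward factorization
rnd = perplexity(seq, [1, 2, 0])   # arbitrary permutation
# For an exact distribution, all three are equal: exp(-log P(seq) / n).
```

The paper's empirical finding is that retrained LLMs, unlike this exact toy distribution, do not satisfy this equality.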
Problem

Research questions and friction points this paper is trying to address.

Investigates if LLMs learn consistent probability distributions across token orders
Identifies methodological flaws in prior studies on ordering effects
Reveals positional biases in LLMs causing probability distribution deviations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proves sequence perplexity invariance under any factorization order
Retrains GPT-2 models with varied token orders
Identifies self-attention biases causing distribution deviations