The Bayesian Geometry of Transformer Attention

📅 2025-12-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates whether Transformers genuinely perform Bayesian inference and distinguishes their inferential abilities from memorization. To this end, it introduces a controllable, analytically tractable "Bayesian wind tunnel" benchmark that enables exact posterior validation. Using geometric diagnostics (orthogonal key-basis analysis, query–key alignment, entropy-parameterized value manifolds) across two tasks (HMM state tracking and bijection elimination), the authors find that residual streams encode beliefs, feed-forward networks execute posterior updates, and attention implements content-addressable routing, revealing a "frame–precision dissociation" during training. Experiments show that small-scale Transformers reconstruct Bayesian posteriors to 10⁻³–10⁻⁴ bit accuracy, whereas capacity-matched MLPs fail by orders of magnitude. The work thus argues that attention is a geometrically necessary architectural component for Bayesian inference in Transformers.
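The "wind tunnel" idea is that the true posterior is available in closed form, so a model's output can be scored exactly. A minimal sketch of the bijection-elimination task: each observed (x, f(x)) pair eliminates candidate bijections, the exact posterior is uniform over the survivors, and accuracy can be measured in bits via KL divergence. The toy alphabet size and the use of KL as the bit-accuracy metric are our assumptions, not details taken from the paper.

```python
import numpy as np
from itertools import permutations

def exact_posterior(observations, n=4):
    """Exact posterior over bijections on range(n): uniform over the
    permutations consistent with the observed (x, f(x)) pairs."""
    perms = list(permutations(range(n)))
    mask = np.array([all(p[x] == y for x, y in observations) for p in perms],
                    dtype=float)
    return mask / mask.sum(), perms

def kl_bits(p, q, eps=1e-12):
    """KL(p || q) in bits -- one way to realize the paper's
    10^-3 - 10^-4 bit accuracy figures (metric choice is an assumption)."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log2(p / q)))

# Two constraints on a 4-element bijection: f(0)=2 and f(1)=0.
post, perms = exact_posterior([(0, 2), (1, 0)], n=4)
```

Because the posterior collapses to a uniform distribution over the consistent permutations, memorization cannot help: any held-out constraint set has a fresh, computable target.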

📝 Abstract
Transformers often appear to perform Bayesian reasoning in context, but verifying this rigorously has been impossible: natural data lack analytic posteriors, and large models conflate reasoning with memorization. We address this by constructing *Bayesian wind tunnels*: controlled environments where the true posterior is known in closed form and memorization is provably impossible. In these settings, small transformers reproduce Bayesian posteriors with 10⁻³–10⁻⁴ bit accuracy, while capacity-matched MLPs fail by orders of magnitude, establishing a clear architectural separation. Across two tasks, bijection elimination and Hidden Markov Model (HMM) state tracking, we find that transformers implement Bayesian inference through a consistent geometric mechanism: residual streams serve as the belief substrate, feed-forward networks perform the posterior update, and attention provides content-addressable routing. Geometric diagnostics reveal orthogonal key bases, progressive query–key alignment, and a low-dimensional value manifold parameterized by posterior entropy. During training this manifold unfurls while attention patterns remain stable, a *frame–precision dissociation* predicted by recent gradient analyses. Taken together, these results demonstrate that hierarchical attention realizes Bayesian inference by geometric design, explaining both the necessity of attention and the failure of flat architectures. Bayesian wind tunnels provide a foundation for mechanistically connecting small, verifiable systems to reasoning phenomena observed in large language models.
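The HMM state-tracking task has an analytic target as well: the filtering posterior p(z_t | x_{1:t}) follows from the standard forward recursion. A hedged sketch, with illustrative 2-state parameters of our own choosing (the paper's actual HMMs are not specified here):

```python
import numpy as np

def forward_posteriors(obs, pi, A, B):
    """Exact filtering posteriors p(z_t | x_{1:t}) for each prefix of obs.
    pi: initial state probs; A[i, j] = p(z'=j | z=i); B[i, k] = p(x=k | z=i)."""
    belief = pi * B[:, obs[0]]
    belief = belief / belief.sum()
    out = [belief]
    for x in obs[1:]:
        # Predict through the transition matrix, then condition on the emission.
        belief = (belief @ A) * B[:, x]
        belief = belief / belief.sum()
        out.append(belief)
    return np.array(out)

# Illustrative parameters (assumed, not from the paper).
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],
              [0.3, 0.7]])
post = forward_posteriors([0, 0, 1], pi, A, B)
```

Each row of `post` is the exact belief a Bayesian observer should hold after that prefix, giving a per-token target against which a transformer's next-token distribution can be checked.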
Problem

Research questions and friction points this paper is trying to address.

Construct controlled environments to verify Bayesian reasoning in transformers
Identify geometric mechanisms enabling Bayesian inference in transformer architecture
Connect small verifiable systems to reasoning in large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian wind tunnels for controlled posterior verification
Transformers implement Bayesian inference via geometric mechanisms
Attention enables content-addressable routing in belief updates
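The routing claim above can be illustrated with a toy dot-product attention step: when keys form a (near-)orthogonal basis, a query aligned with one key retrieves essentially that slot's value, which is the content-addressable behavior the paper attributes to attention. Dimensions, scaling, and data here are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
d = 8
# Orthonormal key rows (mimicking the "orthogonal key basis" diagnostic).
keys = np.linalg.qr(rng.normal(size=(d, d)))[0][:4]
values = rng.normal(size=(4, d))
# A query strongly aligned with key 2.
query = keys[2] * 5.0

weights = softmax(query @ keys.T)
out = weights @ values  # mixture dominated by values[2]
```

With orthogonal keys the logits are zero everywhere except the aligned slot, so the softmax concentrates almost all weight there: retrieval by content rather than by position.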