🤖 AI Summary
Classical nonlinear filtering theory has not been connected to modern Transformer architectures, and existing sequence-modeling approaches offer little interpretability of the dynamical mechanisms underlying their probabilistic modeling and sequential inference.
Method: We model each Transformer layer’s activations as surrogates of a conditional (posterior) measure under a hidden Markov model (HMM), interpret self-attention as a nonlinear Bayesian predictor, and formalize inter-layer transformations as fixed-point iterations on the space of probability measures.
Contribution/Results: We establish, for the first time, a rigorous correspondence between Transformer forward propagation and nonlinear filtering—specifically particle filtering and Gaussian filtering—deriving explicit fixed-point update formulas in the HMM special case. This framework endows attention mechanisms with a principled probabilistic semantics and provides a theoretical foundation for designing interpretable, robust inference architectures grounded in stochastic filtering theory.
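To make the HMM special case concrete, here is a minimal sketch (not taken from the paper; the function name and toy matrices are illustrative) of the standard discrete-state Bayes filter, whose predict-then-correct update is the kind of explicit fixed-point recursion on probability vectors that the summary refers to:

```python
import numpy as np

def hmm_filter_step(pi, A, B, obs):
    """One Bayes-filter update for a discrete HMM.
    pi:  current posterior over hidden states, shape (n,)
    A:   transition matrix, A[i, j] = P(x_{t+1} = j | x_t = i)
    B:   emission matrix,  B[j, y] = P(y_t = y | x_t = j)
    obs: index of the symbol observed at time t+1
    """
    pred = A.T @ pi           # prediction (Chapman-Kolmogorov) step
    post = B[:, obs] * pred   # Bayes correction with the new observation
    return post / post.sum()  # normalize back to a probability vector

# Toy two-state example (hypothetical numbers, for illustration only)
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.7, 0.3], [0.1, 0.9]])
pi = np.array([0.5, 0.5])
for y in [0, 0, 1]:
    pi = hmm_filter_step(pi, A, B, y)
```

Iterating this map transports the posterior forward in time; the paper's claim is that Transformer layer operations can be read as (surrogates of) such measure-valued updates.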
📝 Abstract
In the 1940s, Wiener introduced a linear predictor, where the future prediction is computed by linearly combining the past data. A transformer generalizes this idea: it is a nonlinear predictor where the next-token prediction is computed by nonlinearly combining the past tokens. In this essay, we present a probabilistic model that interprets transformer signals as surrogates of conditional measures, and layer operations as fixed-point updates. An explicit form of the fixed-point update is described for the special case when the probabilistic model is a hidden Markov model (HMM). In part, this paper is an attempt to bridge classical nonlinear filtering theory with modern inference architectures.
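The Wiener-versus-transformer contrast in the abstract can be sketched in a few lines (an illustrative toy, not the paper's construction; all names and weight matrices here are hypothetical). A Wiener-style predictor combines the past with fixed weights, whereas in single-head self-attention the combination weights are themselves a nonlinear (softmax) function of the past:

```python
import numpy as np

def wiener_predict(past, w):
    """Wiener-style linear predictor: the next value is a fixed
    linear combination of the past window."""
    return w @ past

def attention_predict(past, Wq, Wk, Wv):
    """Single-head self-attention as a nonlinear predictor: the
    combination weights depend nonlinearly on the past tokens.
    past: (T, d) array of past token embeddings."""
    q = Wq @ past[-1]                    # query from the latest token
    K = past @ Wk.T                      # keys for all past tokens
    V = past @ Wv.T                      # values for all past tokens
    scores = K @ q / np.sqrt(len(q))     # scaled dot-product scores
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                 # softmax: data-dependent weights
    return alpha @ V                     # weighted combination of the past

# Tiny demo with identity projections (illustrative only)
rng = np.random.default_rng(0)
past = rng.standard_normal((5, 4))       # five past tokens, dimension 4
Wq = Wk = Wv = np.eye(4)
pred = attention_predict(past, Wq, Wk, Wv)
```

Scaling the past scales the Wiener prediction by the same factor, while the attention prediction changes nonlinearly because the softmax weights shift with the data.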