Kalman Linear Attention: Parallel Bayesian Filtering For Efficient Language Modelling and State Tracking

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited expressivity and weak state tracking of existing state-space language models such as Mamba and GLA. The authors recast sequence modelling as a probabilistic inference problem and propose the Kalman Linear Attention (KLA) layer, which reparameterises the Kalman filter in information form so that its Bayesian updates can be computed in parallel. By combining associative scan operations with linear attention mechanisms, the method explicitly models state uncertainty and adds nonlinear representational capacity while preserving linear computational complexity. Notably, this approach achieves the first parallelizable training of Kalman filters within a deep learning architecture. Empirical results show that the proposed model matches or exceeds current state-space models and gated linear attention methods on both language-modelling and state-tracking benchmarks.
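The summary's mention of "combining associative scan operations with linear attention" refers to the standard linear-attention recurrence, whose state update is additive and therefore a prefix sum. A minimal NumPy sketch of that equivalence (illustrative dimensions and names, not the paper's code):

```python
# Sketch of the linear-attention recurrence KLA builds on: the running state
# S_t = S_{t-1} + k_t v_t^T is a cumulative sum of outer products, so it can
# be computed with the same kind of associative scan as the filter updates.
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 4                        # sequence length, head dimension (illustrative)
q = rng.standard_normal((T, d))
k = rng.standard_normal((T, d))
v = rng.standard_normal((T, d))

# Recurrent (sequential) form.
S = np.zeros((d, d))
out_rec = []
for t in range(T):
    S = S + np.outer(k[t], v[t])   # additive state update
    out_rec.append(q[t] @ S)       # read-out at step t
out_rec = np.stack(out_rec)

# Scan form: cumulative sum of outer products (a parallelisable prefix sum).
states = np.cumsum(k[:, :, None] * v[:, None, :], axis=0)
out_scan = np.einsum("td,tde->te", q, states)

assert np.allclose(out_rec, out_scan)
```

Gated variants (GLA, Mamba) insert a decay factor into the update, which remains associative; the paper's contribution is extending this scan structure to full Bayesian filter updates.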

📝 Abstract
State-space language models such as Mamba and gated linear attention (GLA) offer efficient alternatives to transformers due to their linear complexity and parallel training, but often lack the expressivity and robust state-tracking needed for complex reasoning. We address these limitations by reframing sequence modelling through a probabilistic lens, using Bayesian filters as a core primitive. While classical filters such as Kalman filters provide principled state estimation and uncertainty tracking, they are typically viewed as inherently sequential. We show that reparameterising the Kalman filter in information form enables its updates to be computed via an associative scan, allowing efficient parallel training. Building on this insight, we introduce the Kalman Linear Attention (KLA) layer, a neural sequence-modelling primitive that performs time-parallel probabilistic inference while maintaining explicit belief-state uncertainty. KLA offers strictly more expressive nonlinear updates and gating than GLA variants while retaining their computational advantages. On language modelling tasks, KLA matches or outperforms modern SSMs and GLAs across representative discrete token-manipulation and state-tracking benchmarks.
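The abstract's core insight is that the information-form Kalman filter composes associatively. This can be illustrated in the simplest setting: a static scalar state, where each observation contributes an additive (precision, precision-weighted observation) pair, making the filter a literal scan. A hedged sketch under those assumptions, not the paper's KLA layer (which handles dynamics and learned parameters):

```python
# Why the information form makes Kalman updates an associative scan,
# shown for a static scalar state (F = 1, Q = 0). All names illustrative.
import math

def combine(a, b):
    """Associative combine of information-form elements (Lambda, eta)."""
    return (a[0] + b[0], a[1] + b[1])

def associative_scan(elems, op):
    """Naive inclusive scan; in practice this runs as a parallel prefix scan."""
    out, acc = [], None
    for e in elems:
        acc = e if acc is None else op(acc, e)
        out.append(acc)
    return out

# Noisy observations y_k = x + noise, each with variance r_k.
ys = [2.1, 1.9, 2.3, 2.0]
rs = [0.5, 0.5, 1.0, 0.25]

# Information-form element per step: (precision, precision-weighted obs).
elems = [(1.0 / r, y / r) for y, r in zip(ys, rs)]
beliefs = associative_scan(elems, combine)
means_scan = [eta / lam for lam, eta in beliefs]   # posterior means

# Sequential Kalman update for comparison (diffuse prior).
m, p = 0.0, float("inf")
means_seq = []
for y, r in zip(ys, rs):
    if math.isinf(p):
        m, p = y, r                # first observation sets the belief
    else:
        g = p / (p + r)            # Kalman gain
        m = m + g * (y - m)
        p = (1 - g) * p
    means_seq.append(m)

assert all(abs(a - b) < 1e-9 for a, b in zip(means_scan, means_seq))
```

With dynamics (F, Q nontrivial), the scan elements become larger tuples and the combine operator is more involved, but it stays associative, which is what enables the time-parallel training the abstract describes.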
Problem

Research questions and friction points this paper is trying to address.

state-space models
language modelling
state tracking
expressivity
Bayesian filtering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Kalman filter
parallel Bayesian filtering
linear attention
state-space models
uncertainty-aware modeling