🤖 AI Summary
This paper addresses the challenge of parallelizing linear attention in Linear Transformers. We propose a novel parallelization paradigm based on flow discretization. Our core method constructs a matrix-valued state-space model (SSM) and recasts block-wise computation as a discrete approximation of the flows of the system dynamics, thereby decoupling sequential modeling from hardware implementation constraints. We establish, for the first time, a rigorous mathematical connection between linear attention and rough path theory, and introduce low-rank dynamical modeling. Based on this framework, we design two new algorithms: one simplifies and generalizes existing hardware-efficient approaches; the other, inspired by rough path theory, achieves strictly lower computational complexity in DeltaNet's generalized low-rank setting while retaining provable guarantees and hardware efficiency.
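To make the "block-wise computation as flows" idea concrete, here is a minimal sketch, not taken from the paper, using our own notation: plain (unnormalized) linear attention maintains a matrix state via the recurrence $S_t = S_{t-1} + k_t v_t^\top$, an SSM with identity transition. The flow of each chunk is then just that chunk's local sum of outer products, so chunks can be processed independently and their flows composed afterwards.

```python
import numpy as np

# Hypothetical sketch (shapes and names are our assumptions, not the
# paper's notation). Plain linear attention keeps a matrix state
#     S_t = S_{t-1} + k_t v_t^T,
# i.e. an SSM with identity transition. Each chunk's "flow" is its local
# sum of outer products, and flows compose by simple addition.

rng = np.random.default_rng(0)
T, d = 8, 4                       # sequence length, head dimension
K = rng.standard_normal((T, d))   # keys
V = rng.standard_normal((T, d))   # values

# Sequential reference: step the recurrence one token at a time.
S_seq = np.zeros((d, d))
for t in range(T):
    S_seq += np.outer(K[t], V[t])

# Chunked "flow" computation: each chunk's flow is computed independently
# (in parallel on real hardware), then the flows are composed by summation.
chunk = 4
flows = [K[a:a + chunk].T @ V[a:a + chunk] for a in range(0, T, chunk)]
S_par = sum(flows)

assert np.allclose(S_seq, S_par)
```

The point of the sketch is that for an identity-transition SSM the chunk flows commute, so aggregation is trivial; the interesting cases analyzed in the paper have non-trivial transitions, where flow composition is no longer a plain sum.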
📝 Abstract
We present a theoretical framework for analyzing linear attention models through matrix-valued state space models (SSMs). Our approach, Parallel Flows, provides a perspective that systematically decouples temporal dynamics from implementation constraints, enabling independent analysis of critical algorithmic components: chunking, parallelization, and information aggregation. Central to this framework is the reinterpretation of chunking procedures as computations of the flows governing system dynamics. This connection establishes a bridge to mathematical tools from rough path theory, opening the door to new insights into sequence modeling architectures. As a concrete application, we analyze DeltaNet in a generalized low-rank setting motivated by recent theoretical advances. Our methods allow us to design simple, streamlined generalizations of hardware-efficient algorithms from the literature, and to devise entirely new ones, inspired by rough path techniques, with provably lower complexity. This dual contribution demonstrates how principled theoretical analysis can both explain existing practical methods and inspire fundamentally new computational approaches.
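As a hedged illustration of the non-trivial case (this is a sketch under our own assumed convention for the delta rule, not the paper's algorithm), a DeltaNet-style recurrence $S_t = (I - \beta_t k_t k_t^\top)\,S_{t-1} + \beta_t k_t v_t^\top$ makes each step an affine map $S \mapsto A_t S + B_t$. A chunk's flow is then the composition of its steps, a single pair $(P, C)$ acting as $S \mapsto P S + C$, and chunk flows compose associatively, which is what enables chunk-level parallelism:

```python
import numpy as np

# Assumed delta-rule convention (a sketch, not the paper's exact setup):
#     S_t = (I - beta_t k_t k_t^T) S_{t-1} + beta_t k_t v_t^T
# Each step is affine, S -> A_t S + B_t, so a chunk collapses to one
# pair (P, C) with S -> P S + C, and pairs compose associatively.

rng = np.random.default_rng(1)
T, d = 6, 3
K = rng.standard_normal((T, d))
K /= np.linalg.norm(K, axis=1, keepdims=True)   # unit-norm keys
V = rng.standard_normal((T, d))
beta = rng.uniform(0.1, 0.9, T)                 # per-token step sizes

def A(t):
    return np.eye(d) - beta[t] * np.outer(K[t], K[t])

def B(t):
    return beta[t] * np.outer(K[t], V[t])

# Sequential reference: apply the recurrence token by token.
S_seq = np.zeros((d, d))
for t in range(T):
    S_seq = A(t) @ S_seq + B(t)

def chunk_flow(ts):
    """Compose a chunk's affine steps into a single (P, C) pair."""
    P, C = np.eye(d), np.zeros((d, d))
    for t in ts:
        P, C = A(t) @ P, A(t) @ C + B(t)
    return P, C

# Each chunk's flow can be built independently; composition is ordered
# but associative: applying (P1, C1) then (P2, C2) gives
# S -> P2 (P1 S + C1) + C2 = (P2 P1) S + (P2 C1 + C2).
P1, C1 = chunk_flow(range(0, 3))
P2, C2 = chunk_flow(range(3, 6))
P, C = P2 @ P1, P2 @ C1 + C2
S_par = P @ np.zeros((d, d)) + C    # start from zero initial state

assert np.allclose(S_seq, S_par)
```

Because the per-step maps no longer commute, aggregating chunk flows requires ordered matrix products rather than sums; how to organize and approximate exactly this kind of composition efficiently is where the paper's flow and rough-path machinery enters.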