Sinkhorn doubly stochastic attention rank decay analysis

📅 2026-04-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the issue of rank and entropy collapse in standard row-stochastic attention mechanisms within deep Transformers, which leads to representational degradation. To mitigate this, the authors propose a doubly stochastic attention constructed via the Sinkhorn algorithm, augmented with skip connections to preserve representation diversity. The study provides the first theoretical analysis showing that, even after Sinkhorn normalization, the rank of pure self-attention still decays doubly exponentially toward one, albeit at a significantly slower rate than Softmax-based attention. Empirical evaluations on sentiment analysis and image classification tasks demonstrate that the proposed method more effectively maintains the rank of attention matrices and enhances model performance, with corresponding theoretical bounds established for the decay rate.
📝 Abstract
The self-attention mechanism is central to the success of Transformer architectures. However, standard row-stochastic attention has been shown to suffer from significant signal degradation across layers. In particular, it can induce rank collapse, resulting in increasingly uniform token representations, as well as entropy collapse, characterized by highly concentrated attention distributions. Recent work has highlighted the benefits of doubly stochastic attention as a form of entropy regularization, promoting a more balanced attention distribution and leading to improved empirical performance. In this paper, we study rank collapse across network depth and show that doubly stochastic attention matrices normalized with Sinkhorn algorithm preserve rank more effectively than standard Softmax row-stochastic ones. As previously shown for Softmax, skip connections are crucial to mitigate rank collapse. We empirically validate this phenomenon on both sentiment analysis and image classification tasks. Moreover, we derive a theoretical bound for the pure self-attention rank decay when using Sinkhorn normalization and find that rank decays to one doubly exponentially with depth, a phenomenon that has already been shown for Softmax.
Problem

Research questions and friction points this paper is trying to address.

rank collapse
entropy collapse
self-attention
doubly stochastic attention
signal degradation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sinkhorn normalization
doubly stochastic attention
rank collapse
entropy regularization
self-attention
🔎 Similar Papers
No similar papers found.
M
Michela Lapenna
Department of Physics and Astronomy, University of Bologna, Bologna, Italy
R
Rita Fioresi
Department of Pharmacy and Biotechnologies, University of Bologna, Bologna, Italy
Bahman Gharesifard
Bahman Gharesifard
Professor of Mathematics at Queen's University
Control TheoryOptimizationReinforcement LearningNeural NetworksGeometric Control