Geometric and Dynamic Scaling in Deep Transformers

📅 2026-01-03
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the degradation phenomena—such as representational redundancy, rank collapse, and semantic manifold drift—that commonly arise in Transformers as depth increases. The authors attribute these issues, for the first time, to a failure to preserve geometric structure. They propose a unified geometric framework grounded in two orthogonal principles—manifold-constrained hyper-connections and depth-wise delta learning—which decouple the direction and sign of residual updates. Additionally, they introduce a non-monotonic, data-dependent dynamic feature update mechanism along with tangent-space directional constraints. The resulting Manifold-Geometric Transformer (MGT) effectively mitigates rank collapse even beyond 100 layers, demonstrating that geometric structural stability is essential for successful depth scaling in Transformer architectures.
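The paper provides no reference implementation, but the two principles it describes can be illustrated schematically. The sketch below is a hypothetical reading of the summary, assuming the "tangent-space directional constraint" is an orthogonal projection of each residual update onto a local tangent basis, and "delta learning" is a data-dependent scalar gate in [-1, 1] whose sign allows erasure (reflection) of features rather than unconditional accumulation. The function names and the gate's range are illustrative, not taken from the paper.

```python
import numpy as np

def tangent_project(update, basis):
    """Project a residual update onto the span of local tangent
    directions (columns of `basis`), discarding the off-manifold
    component that would otherwise cause drift."""
    q, _ = np.linalg.qr(basis)       # orthonormalize the tangent basis
    return q @ (q.T @ update)        # orthogonal projection onto span(q)

def delta_residual_step(x, update, basis, gate):
    """One hypothetical MGT-style residual step.

    The projection constrains the *direction* of the update, while the
    data-dependent scalar `gate` controls its *sign and magnitude*:
    positive values accumulate features, negative values erase them.
    This is the decoupling of direction and sign described in the summary.
    """
    return x + gate * tangent_project(update, basis)

# Toy example: a 4-d state whose valid tangent directions span the
# first two coordinates; a negative gate erases along those directions.
x = np.zeros(4)
basis = np.eye(4)[:, :2]
out = delta_residual_step(x, np.ones(4), basis, gate=-0.5)
# The off-manifold components (last two coordinates) are discarded;
# the on-manifold components are subtracted rather than added.
```

Standard residual updates correspond to `gate = 1` with no projection; in this reading, both restrictions together keep the representation on the manifold while still allowing redundant features to be removed.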

📝 Abstract
Despite their empirical success, pushing Transformer architectures to extreme depth often leads to a paradoxical failure: representations become increasingly redundant, lose rank, and ultimately collapse. Existing explanations largely attribute this phenomenon to optimization instability or vanishing gradients, yet such accounts fail to explain why collapse persists even under modern normalization and initialization schemes. In this paper, we argue that the collapse of deep Transformers is fundamentally a geometric problem. Standard residual updates implicitly assume that feature accumulation is always beneficial, but offer no mechanism to constrain update directions or to erase outdated information. As depth increases, this leads to systematic drift off the semantic manifold and monotonic feature accumulation, causing representational degeneracy. We propose a unified geometric framework that addresses these failures through two orthogonal principles. First, manifold-constrained hyper-connections restrict residual updates to valid local tangent directions, preventing uncontrolled manifold drift. Second, deep delta learning introduces data-dependent, non-monotonic updates that enable reflection and erasure of redundant features rather than their unconditional accumulation. Together, these mechanisms decouple the direction and sign of feature updates, yielding a stable geometric evolution across depth. We term the resulting architecture the Manifold-Geometric Transformer (MGT). Our analysis predicts that enforcing geometric validity while allowing dynamic erasure is essential for avoiding rank collapse in ultra-deep networks. We outline an evaluation protocol for Transformers exceeding 100 layers to test the hypothesis that geometry, rather than depth itself, is the key limiting factor in deep representation learning.
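The abstract's central failure mode—representations losing rank with depth—can be quantified with a standard diagnostic. A common choice (not specified by the paper, so this is an assumption) is the entropy-based effective rank: the exponential of the Shannon entropy of the normalized singular values of the token-representation matrix. A collapsed layer scores close to 1; a well-spread one approaches the full dimension.

```python
import numpy as np

def effective_rank(h):
    """Entropy-based effective rank of a (tokens x dim) representation
    matrix `h`. Computes exp(H(p)) where p is the singular-value
    distribution normalized to sum to 1. Values near 1 indicate
    rank collapse; values near min(tokens, dim) indicate full rank."""
    s = np.linalg.svd(h, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]                     # drop numerically zero modes
    return float(np.exp(-(p * np.log(p)).sum()))

# Toy check: an orthogonal representation has full effective rank,
# while identical token embeddings collapse to effective rank ~1.
full = effective_rank(np.eye(8))      # ~8.0
collapsed = effective_rank(np.ones((8, 8)))  # ~1.0
```

Tracking this quantity layer by layer across a 100+ layer stack would be one concrete way to run the evaluation protocol the abstract proposes.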
Problem

Research questions and friction points this paper is trying to address.

deep Transformers
representation collapse
geometric drift
rank degeneracy
ultra-deep networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

manifold-constrained hyper-connections
deep delta learning
geometric stability
representational degeneracy
non-monotonic updates