nGPT: Normalized Transformer with Representation Learning on the Hypersphere

📅 2024-10-01
🏛️ arXiv.org
📈 Citations: 8
Influential: 1
📄 PDF
🤖 AI Summary
To address the slow convergence and low parameter efficiency of standard Transformers, this paper proposes nGPT, a fully unit-norm-normalized Transformer architecture. It constrains all vectors (token embeddings, attention and MLP weight matrices, and hidden states) to the unit hypersphere, recasting forward propagation as a sequence of geometric displacements on that sphere. To realize this, the authors introduce spherical attention and MLP blocks together with a norm-preserving update rule, yielding an end-to-end hyperspherical representation-learning paradigm. Experiments show that nGPT reaches the same accuracy with 4–20× fewer training steps, depending on sequence length, substantially improving both convergence speed and parameter efficiency. The work thus offers a geometric modeling framework for Transformers, shifting deep sequence modeling from Euclidean to hyperspherical geometry.

📝 Abstract
We propose a novel neural network architecture, the normalized Transformer (nGPT) with representation learning on the hypersphere. In nGPT, all vectors forming the embeddings, MLP, attention matrices and hidden states are unit norm normalized. The input stream of tokens travels on the surface of a hypersphere, with each layer contributing a displacement towards the target output predictions. These displacements are defined by the MLP and attention blocks, whose vector components also reside on the same hypersphere. Experiments show that nGPT learns much faster, reducing the number of training steps required to achieve the same accuracy by a factor of 4 to 20, depending on the sequence length.
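The abstract's picture of a token stream traveling on the hypersphere, with each layer contributing a displacement, can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation: the function names (`l2_normalize`, `ngpt_layer_update`) and the use of a single scalar step size `alpha` are assumptions made here for clarity (the paper learns per-dimension step sizes), and `block_out` stands in for the output of an attention or MLP block.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    # Project vectors onto the unit hypersphere along the last axis.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def ngpt_layer_update(h, block_out, alpha):
    """One hyperspherical displacement step (illustrative sketch).

    h         : hidden states on the unit sphere, shape (..., d)
    block_out : raw output of an attention or MLP block, shape (..., d)
    alpha     : step size controlling how far h moves toward the block's suggestion
    """
    h_target = l2_normalize(block_out)   # block suggestion, mapped back onto the sphere
    h_moved = h + alpha * (h_target - h) # linear interpolation toward the suggestion
    return l2_normalize(h_moved)         # retract the result onto the hypersphere

# Toy usage: the hidden state stays unit-norm after every layer update.
h = l2_normalize(np.random.randn(2, 8))      # batch of 2 hidden states, dim 8
block_out = np.random.randn(2, 8)            # stand-in for an attention/MLP output
h = ngpt_layer_update(h, block_out, alpha=0.05)
```

The key property illustrated is that normalization after each interpolation keeps hidden states exactly on the sphere, so no norm drift accumulates across layers.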
Problem

Research questions and friction points this paper is trying to address.

Standard Transformers converge slowly and use their parameters inefficiently
Embeddings, attention and MLP matrices, and hidden states are unconstrained in norm
Reaching a target accuracy requires a large number of training steps
Innovation

Methods, ideas, or system contributions that make the work stand out.

All embedding, attention, MLP, and hidden-state vectors are unit-norm normalized
Forward propagation becomes representation learning on the hypersphere, with each layer contributing a displacement
4–20× fewer training steps to reach the same accuracy, depending on sequence length
I. Loshchilov (NVIDIA)
Cheng-Ping Hsieh (NVIDIA)
Simeng Sun (NVIDIA)
Boris Ginsburg (NVIDIA)
Deep Learning · Speech Recognition · Speech Synthesis