POET-X: Memory-efficient LLM Training by Scaling Orthogonal Transformation

📅 2026-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Training large language models faces significant challenges, including high memory consumption, substantial computational overhead, and poor training stability. This work proposes POET-X, a method that leverages scalable orthogonal equivalence transformations, parameter reparameterization, and efficient matrix operations to dramatically improve memory efficiency and training throughput while preserving model generalization and stability. Notably, POET-X enables the successful pretraining of a billion-parameter-scale large language model on a single NVIDIA H100 GPU, a feat unattainable with standard optimizers such as AdamW, which run out of memory under the same hardware constraints.
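The central idea, an orthogonal equivalence transformation, can be illustrated with a small sketch. This is not the paper's implementation; it only assumes the transformation takes the generic form W' = R W Qᵀ with orthogonal R and Q, which leaves the singular-value spectrum of W unchanged (the "spectrum-preserving" property mentioned above):

```python
import numpy as np

# Illustrative sketch (not the paper's code): an orthogonal equivalence
# transformation W' = R @ W @ Q.T with orthogonal R and Q preserves the
# singular-value spectrum of W.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))

# Sample random orthogonal matrices via QR decomposition.
R, _ = np.linalg.qr(rng.standard_normal((64, 64)))
Q, _ = np.linalg.qr(rng.standard_normal((32, 32)))

W_prime = R @ W @ Q.T

# The singular values of W and W' match to numerical precision.
sv_before = np.linalg.svd(W, compute_uv=False)
sv_after = np.linalg.svd(W_prime, compute_uv=False)
print(np.allclose(sv_before, sv_after))
```

Because the spectrum is fixed at initialization, training stability properties tied to the singular values carry over to the transformed weights.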

📝 Abstract
Efficient and stable training of large language models (LLMs) remains a core challenge in modern machine learning systems. To address this challenge, Reparameterized Orthogonal Equivalence Training (POET), a spectrum-preserving framework that optimizes each weight matrix through orthogonal equivalence transformation, has been proposed. Although POET provides strong training stability, its original implementation incurs high memory consumption and computational overhead due to intensive matrix multiplications. To overcome these limitations, we introduce POET-X, a scalable and memory-efficient variant that performs orthogonal equivalence transformations with significantly reduced computational cost. POET-X maintains the generalization and stability benefits of POET while achieving substantial improvements in throughput and memory efficiency. In our experiments, POET-X enables the pretraining of billion-parameter LLMs on a single NVIDIA H100 GPU, whereas standard optimizers such as AdamW run out of memory under the same settings.
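The abstract's "reparameterized" framing hinges on expressing an orthogonal factor through unconstrained parameters so it can be updated with ordinary gradient steps. One standard way to do this (illustrative only; the paper's exact parameterization may differ) is the Cayley transform, which maps a skew-symmetric matrix A to an orthogonal matrix Q = (I − A)⁻¹(I + A):

```python
import numpy as np

# Sketch of a standard reparameterization for an orthogonal factor
# (illustrative; not necessarily the parameterization POET-X uses):
# the Cayley transform maps any skew-symmetric A to an orthogonal Q.
def cayley(A):
    n = A.shape[0]
    I = np.eye(n)
    # Q = (I - A)^{-1} (I + A); solve avoids forming the explicit inverse.
    return np.linalg.solve(I - A, I + A)

rng = np.random.default_rng(1)
M = rng.standard_normal((16, 16))
A = (M - M.T) / 2          # skew-symmetric generator, freely optimizable
Q = cayley(A)

# Q is orthogonal: Q.T @ Q == I up to numerical precision.
print(np.allclose(Q.T @ Q, np.eye(16), atol=1e-8))
```

Training the generator A instead of Q keeps the factor exactly orthogonal throughout optimization without explicit re-projection, which is the kind of structure that makes spectrum-preserving updates tractable.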
Problem

Research questions and friction points this paper is trying to address.

large language models
memory efficiency
orthogonal transformation
training stability
computational overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

POET-X
orthogonal transformation
memory-efficient training
large language models
reparameterization