UIO-LLMs: Unbiased Incremental Optimization for Long-Context LLMs

📅 2024-06-26
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of modeling long texts due to limited context windows in large language models (e.g., Llama2-7B-chat supports only 4K tokens), this paper proposes an unbiased incremental optimization framework. It reformulates memory-augmented Transformers as fully connected RNNs, integrating weight-shared encoder-decoder architectures with a modified truncated backpropagation through time (TBPTT) training paradigm. Crucially, it introduces the first unbiased gradient update mechanism, eliminating truncation bias inherent in conventional TBPTT. The method incurs only a 2% parameter overhead yet extends effective context length from 4K to 100K tokens, with inference computational complexity scaling nearly linearly. Empirical evaluations demonstrate substantial improvements on long-document understanding and generation tasks. This work establishes a novel, efficient, and scalable paradigm for modeling long-range dependencies in transformer-based language models.

📝 Abstract
Managing long texts is challenging for large language models (LLMs) due to limited context window sizes. This study introduces UIO-LLMs, an unbiased incremental optimization approach for memory-enhanced transformers under long-context settings. We initially conceptualize the process as a streamlined encoder-decoder framework where the weights-shared encoder and decoder respectively encapsulate a context segment into memories and leverage these memories to predict outputs of the subsequent segment. Subsequently, by treating our memory-enhanced transformers as fully-connected recurrent neural networks (RNNs), we refine the training process using the Truncated Backpropagation Through Time (TBPTT) algorithm, which incorporates innovative incremental optimization techniques. These techniques not only diminish time complexity but also address the bias in gradient computation through an unbiased optimization process. UIO-LLMs successfully handle long context, such as extending the context window of Llama2-7b-chat from 4K to 100K tokens with minimal 2% additional parameters, while keeping the inference cost nearly linear as context length increases.
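The encoder-decoder scheme described in the abstract can be illustrated with a minimal sketch: a single weight-shared transformer stack alternately compresses each context segment into a few memory vectors and conditions on those memories when processing the next segment. All names and shapes here (`ToyMemoryTransformer`, `encode`, `decode`, the memory-token layout) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class ToyMemoryTransformer(nn.Module):
    """Toy sketch: one shared transformer acts as both encoder and decoder."""

    def __init__(self, d_model=32, n_mem=4, n_heads=4):
        super().__init__()
        self.n_mem = n_mem
        # Learned initial memory slots, broadcast over the batch.
        self.init_mem = nn.Parameter(torch.randn(1, n_mem, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=n_heads, batch_first=True)
        self.shared = nn.TransformerEncoder(layer, num_layers=1)  # shared weights

    def encode(self, mem, segment):
        # Compress [memories | segment] and read the updated memories back out.
        out = self.shared(torch.cat([mem, segment], dim=1))
        return out[:, :self.n_mem, :]

    def decode(self, mem, segment):
        # Condition the same-weight transformer on the memories to represent
        # the next segment; a real model would project this to token logits.
        out = self.shared(torch.cat([mem, segment], dim=1))
        return out[:, self.n_mem:, :]

model = ToyMemoryTransformer().eval()
segments = torch.randn(5, 2, 16, 32)      # 5 segments, batch 2, 16 tokens each
mem = model.init_mem.expand(2, -1, -1)
with torch.no_grad():
    for i in range(4):
        mem = model.encode(mem, segments[i])   # fold segment i into memory
    preds = model.decode(mem, segments[4])     # predict over the next segment
```

Chaining `encode` across segments is what makes the whole pipeline behave like a recurrent network over segments, which is why the paper can train it with a TBPTT-style procedure.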
Problem

Research questions and friction points this paper is trying to address.

Extending context window for LLMs with minimal added parameters
Reducing time complexity in long-context processing
Unbiased gradient computation in incremental optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unbiased incremental optimization for memory-enhanced transformers
Streamlined encoder-decoder framework with shared weights
Truncated Backpropagation Through Time with incremental techniques
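For context on the last point: conventional TBPTT detaches the recurrent state between segments, so gradients never flow across segment boundaries, and the resulting updates are biased toward short-range dependencies. The paper's unbiased mechanism removes that bias; the sketch below shows only the standard biased baseline (a toy recurrence, not the paper's model) to make the truncation point concrete.

```python
import torch

# Standard TBPTT baseline: the memory state is detached after each segment,
# so no gradient crosses the segment boundary. This truncation is the
# source of the gradient bias that UIO-LLMs is designed to eliminate.
W = torch.randn(8, 8, requires_grad=True)
mem = torch.ones(1, 8)
losses = []
for step in range(3):
    mem = torch.tanh(mem @ W)          # toy recurrent update
    losses.append(mem.pow(2).mean())   # toy per-segment loss
    mem = mem.detach()                 # truncation: blocks cross-segment gradients
loss = torch.stack(losses).sum()
loss.backward()                        # W.grad reflects only within-segment terms
```

With the `detach()` removed, backpropagation would run through all segments (full BPTT), which is unbiased but costly; the paper's contribution is keeping incremental, TBPTT-like cost while restoring unbiased gradients.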
Wenhao Li
Xiamen University
Mingbao Lin
Principal Research Scientist, Rakuten
Model Compression, (Multimodal) LLMs, Diffusion Models
Yunshan Zhong
Hainan University
Shuicheng Yan
Skywork AI
Rongrong Ji
Xiamen University