🤖 AI Summary
Transformer-based large language models face memory and computational bottlenecks in long-context generation because softmax attention scales quadratically with sequence length and the key-value (KV) cache grows without bound. Lizard addresses this by introducing a flexible subquadratic architecture that efficiently linearizes pre-trained LLMs: it integrates gated linear attention, which compresses global context and provides adaptive memory control, with a sliding-window mechanism augmented by meta memory, which balances long-range dependencies against fine-grained local detail. Together these components support constant-memory inference and strong length generalization, and a hardware-aware training algorithm further improves efficiency. Experiments show that Lizard nearly matches the teacher model's performance on standard language modeling, outperforms prior linearization methods by 18 points on 5-shot MMLU, and achieves significant gains on associative recall tasks. Crucially, Lizard is the first approach to enable structurally flexible, scalable subquadratic long-context modeling without compromising generation quality.
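For reference, gated linear attention of the kind the summary describes can be written as a recurrence over a fixed-size state, which is what makes constant-memory inference possible. The formulation below is a generic illustration with assumed symbols, not necessarily Lizard's exact parameterization:

$$
S_t = \operatorname{diag}(g_t)\, S_{t-1} + k_t v_t^{\top}, \qquad o_t = S_t^{\top} q_t,
$$

where $S_t \in \mathbb{R}^{d_k \times d_v}$ is a running memory state whose size is independent of sequence length, and $g_t \in (0,1)^{d_k}$ is a learned gate controlling how much past context is retained.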
📝 Abstract
We propose Lizard, a linearization framework that transforms pretrained Transformer-based Large Language Models (LLMs) into flexible, subquadratic architectures for infinite-context generation. Transformer-based LLMs face significant memory and computational bottlenecks as context lengths increase, due to the quadratic complexity of softmax attention and the growing key-value (KV) cache. Lizard addresses these limitations by introducing a subquadratic attention mechanism that closely approximates softmax attention while preserving output quality. Unlike previous linearization methods, which are often limited by fixed model structures and therefore exclude gating mechanisms, Lizard incorporates a gating module inspired by recent state-of-the-art linear models. This enables adaptive memory control, supports constant-memory inference, offers strong length generalization, and allows more flexible model design. Lizard combines gated linear attention for global context compression with sliding window attention enhanced by meta memory, forming a hybrid mechanism that captures both long-range dependencies and fine-grained local interactions. Moreover, we introduce a hardware-aware algorithm that accelerates the training of our models. Extensive experiments show that Lizard achieves near-lossless recovery of the teacher model's performance across standard language modeling tasks, while significantly outperforming previous linearization methods. On the 5-shot MMLU benchmark, Lizard improves over prior models by 18 points and achieves significant gains on associative recall tasks.
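For intuition, here is a minimal sketch of how a hybrid of gated linear attention and sliding-window attention with a few always-visible "meta memory" tokens could be combined at inference time. The function names, the fixed 0.5 mixing weight, and the single-head NumPy formulation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gated_linear_attention(q, k, v, g):
    """Recurrent form with a fixed-size state: S_t = diag(g_t) S_{t-1} + k_t v_t^T."""
    T, d_k = q.shape
    d_v = v.shape[1]
    S = np.zeros((d_k, d_v))          # memory state, size independent of T
    out = np.zeros((T, d_v))
    for t in range(T):
        S = g[t][:, None] * S + np.outer(k[t], v[t])
        out[t] = S.T @ q[t]
    return out

def sliding_window_attention(q, k, v, window=64, n_meta=4):
    """Causal softmax attention restricted to the last `window` tokens,
    plus `n_meta` prefix tokens acting as persistent meta memory."""
    T, d_k = q.shape
    out = np.zeros((T, v.shape[1]))
    for t in range(T):
        local = range(max(0, t + 1 - window), t + 1)
        meta = range(min(n_meta, t + 1))
        idx = sorted(set(meta) | set(local))
        scores = k[idx] @ q[t] / np.sqrt(d_k)
        w = np.exp(scores - scores.max())
        w /= w.sum()
        out[t] = w @ v[idx]
    return out

# Hypothetical combination: in practice the two branches would likely be mixed
# by a learned mechanism; a fixed weight is used here purely for illustration.
T, d_k, d_v = 128, 32, 32
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((T, d)) for d in (d_k, d_k, d_v))
g = 1.0 / (1.0 + np.exp(-rng.standard_normal((T, d_k))))   # gates in (0, 1)
hybrid = 0.5 * gated_linear_attention(q, k, v, g) + 0.5 * sliding_window_attention(q, k, v)
```

The point of the sketch is the asymmetry between the two branches: the gated linear branch carries a compressed summary of the entire prefix in a state of constant size, while the windowed branch retains exact softmax attention over recent tokens and a handful of meta-memory positions.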