🤖 AI Summary
To address the high computational cost and slow inference of Transformer-based large language models, this paper proposes an efficient knowledge distillation framework tailored to linear RNNs (Mamba). Our method compresses a pretrained Transformer (e.g., Llama3-8B) into a hybrid architecture in which attention layers make up only 25% of the total, and introduces a distillation paradigm that reuses the linear projection weights of the original attention modules to initialize the Mamba layers. We further design a hardware-aware speculative decoding algorithm optimized for Mamba to improve throughput, and observe that the distilled model extrapolates naturally beyond its training sequence length. On AlpacaEval 2, our distilled model achieves a 29.61 length-controlled win rate against GPT-4; on MT-Bench, it scores 7.35, significantly outperforming open-source linear RNN baselines of comparable parameter count. In needle-in-a-haystack retrieval, it attains near-perfect accuracy (≈100%) at 20× the distillation length.
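For intuition, below is a minimal PyTorch sketch of the weight-reuse initialization. The module and attribute names (`Attention`, `MambaLayer`, `q_proj`, `c_proj`, etc.) are illustrative stand-ins rather than the released code's API; the mapping follows the linear-attention view of Mamba, in which C plays the role of the query, B of the key, and the SSM input x of the value.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for one attention layer and one Mamba layer;
# real checkpoints (e.g., Llama3) expose projections under similar names.
class Attention(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.q_proj = nn.Linear(d, d, bias=False)
        self.k_proj = nn.Linear(d, d, bias=False)
        self.v_proj = nn.Linear(d, d, bias=False)
        self.o_proj = nn.Linear(d, d, bias=False)

class MambaLayer(nn.Module):
    def __init__(self, d):
        super().__init__()
        # Linear-attention view: C ~ Q, B ~ K, x ~ V.
        self.c_proj = nn.Linear(d, d, bias=False)
        self.b_proj = nn.Linear(d, d, bias=False)
        self.x_proj = nn.Linear(d, d, bias=False)
        self.out_proj = nn.Linear(d, d, bias=False)

@torch.no_grad()
def init_mamba_from_attention(attn: Attention, mamba: MambaLayer) -> None:
    """Seed the new Mamba layer with the pretrained attention projections.
    SSM-specific parameters (A, dt, gating) have no attention counterpart
    and are left at their fresh initialization, to be learned during
    distillation."""
    mamba.c_proj.weight.copy_(attn.q_proj.weight)    # Q -> C
    mamba.b_proj.weight.copy_(attn.k_proj.weight)    # K -> B
    mamba.x_proj.weight.copy_(attn.v_proj.weight)    # V -> x
    mamba.out_proj.weight.copy_(attn.o_proj.weight)  # O -> output proj

d_model = 64
attn, mamba = Attention(d_model), MambaLayer(d_model)
init_mamba_from_attention(attn, mamba)
```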
📝 Abstract
Linear RNN architectures, like Mamba, can be competitive with Transformer models in language modeling while having advantageous deployment characteristics. Given the focus on training large-scale Transformer models, we consider the challenge of converting these pretrained models for deployment. We demonstrate that it is feasible, with academic GPU resources, to distill large Transformers into linear RNNs by reusing the linear projection weights from their attention layers. The resulting hybrid model, which retains a quarter of the attention layers, achieves performance comparable to the original Transformer on chat benchmarks and outperforms open-source hybrid Mamba models trained from scratch on trillions of tokens in both chat and general benchmarks. Moreover, we introduce a hardware-aware speculative decoding algorithm that accelerates inference for Mamba and hybrid models. Overall, we show how, with limited computational resources, we can remove many of the original attention layers and generate from the resulting model more efficiently. Our top-performing model, distilled from Llama3-8B-Instruct, achieves a 29.61 length-controlled win rate on AlpacaEval 2 against GPT-4 and scores 7.35 on MT-Bench, surpassing the best 8B-scale instruction-tuned linear RNN model. We also find that the distilled model extrapolates naturally in length, showing almost perfect accuracy in the needle-in-a-haystack test at 20× the distillation length. Code and pre-trained checkpoints are open-sourced at https://github.com/jxiw/MambaInLlama and https://github.com/itsdaniele/speculative_mamba.
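For readers unfamiliar with speculative decoding, the sketch below shows the generic draft-and-verify acceptance rule such algorithms share: a cheap draft model proposes K tokens, the target model scores them in one pass, and each token is accepted with probability min(1, p_target/q_draft). This is a minimal sketch of the standard rule, not the paper's algorithm; the hardware-aware contribution concerns how a Mamba target verifies drafts efficiently (linear RNNs have no KV cache to rewind, so SSM states must be recomputed or checkpointed), and that kernel-level logic is omitted here.

```python
import torch

def speculative_step(draft_logp: torch.Tensor,
                     target_logp: torch.Tensor,
                     tokens: torch.Tensor) -> int:
    """Verify K drafted tokens against the target model.

    draft_logp, target_logp: (K, V) log-probabilities over the vocabulary
    at each drafted position; tokens: (K,) drafted token ids.
    Returns the number of tokens accepted (a rejected position would be
    resampled from the residual distribution, omitted for brevity).
    """
    K = tokens.shape[0]
    for i in range(K):
        q = draft_logp[i, tokens[i]].exp()   # draft prob of this token
        p = target_logp[i, tokens[i]].exp()  # target prob of this token
        accept_prob = torch.clamp(p / q, max=1.0)
        if torch.rand(()) >= accept_prob:
            return i  # reject: keep tokens[:i] only
    return K  # all drafted tokens accepted

# Toy usage with random distributions (hypothetical sizes).
K, V = 4, 32000
draft = torch.log_softmax(torch.randn(K, V), dim=-1)
target = torch.log_softmax(torch.randn(K, V), dim=-1)
drafted = torch.randint(V, (K,))
print(speculative_step(draft, target, drafted))
```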