SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator

📅 2024-12-16
📈 Citations: 1
Influential: 1
📄 PDF
🤖 AI Summary
To address the slow inference of large language models (LLMs), excessive KV cache overhead, and over-attention to seemingly uninformative separator tokens, this paper proposes a "segment-to-separator" semantic compression paradigm. Attention analysis first reveals that separator tokens carry a disproportionate share of contextual information in self-attention. Leveraging this insight, the authors design a training-free, dynamic token compression mechanism that condenses each semantic segment into its separator token with negligible information loss. The method combines fine-grained attention profiling, adaptive compression heuristics, and efficient kernels for training acceleration, and is compatible with mainstream architectures such as Llama-3. Experiments show over 50% KV cache reduction on GSM8K-CoT with comparable accuracy, plus stable streaming inference over sequences exceeding 4 million tokens while preserving language modeling fidelity.
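The selection rule described above can be sketched in a few lines. This is an illustrative, training-free approximation, not the paper's implementation: the separator vocabulary, window sizes, and function names below are assumptions.

```python
# Illustrative SepLLM-style token selection: a past token's KV entry is
# retained only if it is (a) an initial "sink" token, (b) a separator
# token (which condenses its preceding segment), or (c) inside a recent
# local window. Thresholds and the separator set are assumed values.

SEPARATORS = {".", ",", ";", "!", "?", "\n"}  # assumed separator vocabulary

def keep_indices(tokens, n_initial=3, n_recent=4):
    """Return indices of tokens whose KV-cache entries are retained."""
    n = len(tokens)
    kept = []
    for i, tok in enumerate(tokens):
        if i < n_initial:            # initial attention-sink prefix
            kept.append(i)
        elif i >= n - n_recent:      # recent local window
            kept.append(i)
        elif tok in SEPARATORS:      # separator stands in for its segment
            kept.append(i)
    return kept
```

For example, with the token list `["A", "B", "C", "d", ".", "e", "f", ",", "g", "h", "i", "j"]`, only the three initial tokens, the two separators, and the four most recent tokens survive; the remaining segment-interior tokens are dropped from the cache.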

📝 Abstract
Large Language Models (LLMs) have exhibited exceptional performance across a spectrum of natural language processing tasks. However, their substantial sizes pose considerable challenges, particularly in computational demands and inference speed, due to their quadratic complexity. In this work, we have identified a key pattern: certain seemingly meaningless special tokens (i.e., separators) contribute disproportionately to attention scores compared to semantically meaningful tokens. This observation suggests that information of the segments between these separator tokens can be effectively condensed into the separator tokens themselves without significant information loss. Guided by this insight, we introduce SepLLM, a plug-and-play framework that accelerates inference by compressing these segments and eliminating redundant tokens. Additionally, we implement efficient kernels for training acceleration. Experimental results across training-free, training-from-scratch, and post-training settings demonstrate SepLLM's effectiveness. Notably, using the Llama-3-8B backbone, SepLLM achieves over 50% reduction in KV cache on the GSM8K-CoT benchmark while maintaining comparable performance. Furthermore, in streaming settings, SepLLM effectively processes sequences of up to 4 million tokens or more while maintaining consistent language modeling capabilities.
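For the streaming setting mentioned in the abstract, one way to picture the mechanism is a cache manager that, whenever a budget is exceeded, evicts every entry that is neither a sink token, a separator, nor recent. The class below is a toy sketch under those assumptions; the budget, window sizes, and separator set are illustrative, not the paper's values.

```python
class StreamingSepCache:
    """Toy KV-cache manager in the spirit of SepLLM's streaming setting:
    once the cache exceeds `budget` entries, tokens outside the sink
    prefix and the recent window are evicted unless they are separators.
    Purely illustrative; parameters are assumed, not from the paper."""

    def __init__(self, budget=8, n_sink=2, n_recent=3, separators=(".", ",")):
        self.budget, self.n_sink, self.n_recent = budget, n_sink, n_recent
        self.separators = set(separators)
        self.cache = []  # list of (position, token) pairs standing in for KV entries

    def append(self, pos, tok):
        self.cache.append((pos, tok))
        if len(self.cache) > self.budget:
            self._evict()

    def _evict(self):
        recent = self.cache[-self.n_recent:]          # always keep recent window
        head = self.cache[:-self.n_recent]
        head = [
            (p, t) for i, (p, t) in enumerate(head)
            if i < self.n_sink or t in self.separators  # keep sinks + separators
        ]
        self.cache = head + recent
```

Because separators and sink tokens are never evicted, the cache size grows with the number of segments rather than the number of tokens, which is what makes multi-million-token streams feasible.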
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Computational Efficiency
Irrelevant Token Attention
Innovation

Methods, ideas, or system contributions that make the work stand out.

SepLLM
Efficiency Improvement
Large Language Models