D2O: Dynamic Discriminative Operations for Efficient Long-Context Inference of Large Language Models

📅 2024-06-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the explosive memory growth of the KV cache in long-sequence generation with large language models (LLMs), as well as the context loss and hallucination caused by conventional attention-score-based eviction strategies, this paper proposes a fine-tuning-free, dual-granularity dynamic discriminative compression method. At the layer granularity, it adaptively allocates each layer's KV retention ratio based on attention density; at the token granularity, it employs similarity-threshold-driven recall and merging to preserve critical semantic content. The method is fully unsupervised and architecture-agnostic. Evaluated across diverse LLMs, it achieves more than a 3× improvement in inference throughput and a substantial reduction in GPU memory while maintaining high-quality long-text generation, establishing a novel paradigm for efficient long-context inference.
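As a concrete illustration of the layer-granularity step, here is a minimal sketch in PyTorch that splits a global KV budget across layers in proportion to an attention-density proxy. The function name, the density measure (fraction of attention mass above the uniform baseline), and the proportional allocation rule are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def allocate_layer_budgets(attn_maps, total_budget, min_keep=16):
    """Split a global KV-cache budget across layers by attention density.

    attn_maps: one [heads, q_len, k_len] attention tensor per layer,
    e.g. captured during prefill. Dense layers (attention spread over
    many tokens) receive a larger share, so they evict less; sparse
    layers, whose mass concentrates on a few tokens, can evict more.
    """
    densities = []
    for attn in attn_maps:
        k_len = attn.shape[-1]
        # Density proxy (an assumption, not the paper's exact measure):
        # fraction of attention entries above the uniform baseline 1/k_len.
        densities.append((attn > 1.0 / k_len).float().mean())

    weights = torch.stack(densities)
    weights = weights / weights.sum()
    # Proportional allocation with a floor so no layer is starved.
    return [max(min_keep, int(w.item() * total_budget)) for w in weights]
```

The `min_keep` floor is one simple way to guarantee that no layer is evicted down to nothing; D2O's actual dynamic allocation strategy is more discriminating, but the proportional idea captures why dense (typically shallow) layers end up avoiding excessive eviction.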

📝 Abstract
Generative inference in Large Language Models (LLMs) is impeded by the growing memory demands of the Key-Value (KV) cache, especially for longer sequences. Traditional KV cache eviction strategies, which discard less critical KV pairs based on attention scores, often degrade generation quality, leading to issues such as context loss or hallucinations. In this work, we introduce Dynamic Discriminative Operations (D2O), a KV cache compression method that optimizes KV cache size dynamically and discriminatively at two levels without fine-tuning, while preserving essential context. At the layer level, D2O leverages the varying densities of attention weights between shallow and deep layers to dynamically determine which layers should avoid excessive eviction via a novel dynamic allocation strategy to minimize information loss. At the token level, D2O incorporates a compensation mechanism that maintains a similarity threshold to re-discriminate the importance of currently discarded tokens, determining whether they should be recalled and merged with similar tokens. We conduct experiments on various benchmarks and LLM architectures. Our results show that D2O not only achieves significant memory savings and enhances inference throughput by more than 3×, but also maintains high-quality long-text generation.
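The token-level compensation mechanism can be sketched as follows, again in PyTorch: entries scheduled for eviction whose keys are sufficiently similar to a retained entry (cosine similarity above a threshold) are recalled and merged into that entry rather than discarded. The function name, the EMA-style merge rule, and the default threshold are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def recall_and_merge(kept_keys, kept_values, evicted_keys, evicted_values,
                     sim_threshold=0.9, alpha=0.5):
    """Merge evicted KV entries into their most similar retained entries.

    kept_keys, kept_values:       [n_kept, d] retained cache entries
    evicted_keys, evicted_values: [n_evicted, d] entries slated for eviction
    Entries whose best match falls below the threshold are truly discarded.
    """
    # Cosine similarity between every evicted key and every kept key.
    sim = F.cosine_similarity(
        evicted_keys.unsqueeze(1),  # [n_evicted, 1, d]
        kept_keys.unsqueeze(0),     # [1, n_kept, d]
        dim=-1,
    )  # -> [n_evicted, n_kept]
    best_sim, best_idx = sim.max(dim=-1)

    for i in range(evicted_keys.shape[0]):
        if best_sim[i] >= sim_threshold:
            j = best_idx[i]
            # EMA-style merge (an assumed rule): keep most of the retained
            # entry, fold in a fraction of the evicted one so its semantics
            # survive in compressed form.
            kept_keys[j] = alpha * kept_keys[j] + (1 - alpha) * evicted_keys[i]
            kept_values[j] = alpha * kept_values[j] + (1 - alpha) * evicted_values[i]
    return kept_keys, kept_values
```

Merging rather than discarding keeps a compressed trace of the evicted token's semantics in the cache, which is what the abstract credits with avoiding the context loss and hallucinations of pure eviction.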
Problem

Research questions and friction points this paper is trying to address.

Reduces the memory demands of the KV cache in LLMs
Improves long-context inference without quality loss
Enhances inference throughput and memory efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic KV cache compression without fine-tuning
Layer-level dynamic allocation to minimize information loss
Token-level compensation mechanism for maintaining context quality
👥 Authors
Zhongwei Wan (The Ohio State University, PhD student)
Xinjian Wu (University of Chinese Academy of Sciences)
Yu Zhang (Tongji University)
Yi Xin (California Institute of Technology)
Chaofan Tao (The University of Hong Kong)
Zhihong Zhu (Peking University)
Xin Wang (The Ohio State University)
Siqi Luo (Shanghai Jiao Tong University)
Jing Xiong (The University of Hong Kong)
Longyue Wang (Alibaba International)
Mi Zhang (The Ohio State University)