🤖 AI Summary
Long contexts in code generation impose high API costs and latency, yet existing context pruning methods overlook the structural and semantic dependencies in code, limiting their effectiveness. This paper introduces LongCodeZip, a lightweight context compression framework designed specifically for large language models in code-related tasks. It combines function-level coarse-grained filtering with block-level fine-grained selection under an adaptive token budget: functions are first ranked by the conditional perplexity of the instruction given each function, and the retained functions are then further compressed using block-wise perplexity and dynamic budget control. Fully plug-and-play and requiring no fine-tuning, LongCodeZip preserves both syntactic validity and semantic coherence. Evaluated across diverse code tasks, including code completion, summarization, and question answering, it achieves up to a 5.6× context compression ratio, significantly reducing latency and inference cost while maintaining or even improving task performance.
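The coarse-grained stage can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: `ppl_fn` stands in for a code LLM that returns the conditional perplexity of the instruction given a candidate function (lower perplexity meaning higher relevance), and the token-overlap stub below is only a hypothetical placeholder for that scorer.

```python
import math
import re

def rank_functions(functions, instruction, ppl_fn, keep_ratio=0.5):
    """Rank function-level chunks by the conditional perplexity of the
    instruction given each chunk (lower = more relevant), keeping the
    top fraction. In practice ppl_fn would query an LLM's log-probs."""
    scored = sorted(functions, key=lambda f: ppl_fn(instruction, f))
    keep = max(1, math.ceil(len(scored) * keep_ratio))
    return scored[:keep]

def overlap_ppl(instruction, chunk):
    """Toy stand-in for conditional perplexity: more shared identifiers
    between instruction and chunk -> lower 'perplexity'."""
    inst = set(re.findall(r"\w+", instruction))
    shared = len(inst & set(re.findall(r"\w+", chunk)))
    return 1.0 / (1 + shared)

funcs = [
    "def parse_config(path): ...",
    "def sort_users(users): ...",
    "def load_config(path): return parse_config(path)",
]
kept = rank_functions(funcs, "fix config loading in parse_config path", overlap_ppl)
```

With a 0.5 keep ratio, the two config-related functions survive and the unrelated sorting helper is pruned.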
📝 Abstract
Code generation under long contexts is becoming increasingly critical as Large Language Models (LLMs) are required to reason over extensive information in the codebase. While recent advances enable code LLMs to process long inputs, high API costs and generation latency remain substantial bottlenecks. Existing context pruning techniques, such as LLMLingua, achieve promising results for general text but overlook code-specific structures and dependencies, leading to suboptimal performance in programming tasks. In this paper, we propose LongCodeZip, a novel plug-and-play code compression framework designed specifically for code LLMs. LongCodeZip employs a dual-stage strategy: (1) coarse-grained compression, which identifies and ranks function-level chunks using conditional perplexity with respect to the instruction, retaining only the most relevant functions; and (2) fine-grained compression, which segments retained functions into blocks based on perplexity and selects an optimal subset under an adaptive token budget to maximize relevance. Evaluations across multiple tasks, including code completion, summarization, and question answering, show that LongCodeZip consistently outperforms baseline methods, achieving up to a 5.6x compression ratio without degrading task performance. By effectively reducing context size while preserving essential information, LongCodeZip enables LLMs to better scale to real-world, large-scale code scenarios, advancing the efficiency and capability of code intelligence applications.
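The fine-grained stage described in the abstract, selecting an optimal subset of blocks under an adaptive token budget to maximize relevance, amounts to a 0/1 knapsack problem. A minimal sketch follows; the block relevance scores and token counts are assumed to be given (e.g., derived from block-wise perplexity), and the numeric values are hypothetical.

```python
def select_blocks(blocks, budget):
    """0/1 knapsack over (relevance, tokens) pairs: pick the subset of
    blocks with maximum total relevance whose token count fits the budget.
    dp[b] = (best relevance using at most b tokens, chosen block indices)."""
    dp = [(0.0, [])] * (budget + 1)
    for i, (rel, tok) in enumerate(blocks):
        if tok > budget:
            continue
        # Iterate budgets downward so each block is used at most once.
        for b in range(budget, tok - 1, -1):
            cand = dp[b - tok][0] + rel
            if cand > dp[b][0]:
                dp[b] = (cand, dp[b - tok][1] + [i])
    return dp[budget]

# Each block: (relevance score, token count) -- illustrative values only.
blocks = [(5.0, 40), (3.0, 30), (4.0, 50), (2.0, 10)]
best_rel, chosen = select_blocks(blocks, budget=80)
```

Here the selector keeps blocks 0, 1, and 3 (total 80 tokens, relevance 10.0), rejecting the combination with block 2 that would exceed the budget.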