🤖 AI Summary
To address the excessive prompt length in code retrieval-augmented generation (RAG) caused by limited context windows and high computational overhead, this paper proposes the first lightweight, code-specific prompt compression framework. Methodologically: (1) it introduces a type-aware, priority-driven token pruning strategy that leverages program analysis to identify syntactic units and model their criticality; (2) it designs a compact language-model compressor with a copy mechanism, enabling tunable compression ratios while preserving semantic fidelity. The key innovation lies in integrating code structural priors into the compression process to achieve fine-grained, controllable compression. Evaluated on Assertion Generation, Bugs2Fix, and Code Suggestion tasks, the method outperforms state-of-the-art baselines by 23.4%, 28.7%, and 8.7%, respectively, while significantly mitigating context overflow and reducing inference cost.
📝 Abstract
Retrieval-Augmented Generation (RAG) enhances coding tasks by incorporating retrieved code examples into prompts. However, lengthy prompts, often exceeding tens of thousands of tokens, introduce challenges related to the limited context windows of language models (LMs) and high computational costs. Existing prompt compression techniques focus on natural language and lack tailored solutions for code. To address this gap, we propose CodePromptZip, a framework that compresses code examples before integrating them into RAG workflows. Our framework employs a type-aware, priority-driven strategy to construct training samples for a code compression model. Using program analysis, we identify token types (e.g., Identifier) and perform an ablation analysis to rank their removal priorities based on their impact on task performance. We then train a small LM as the compressor on these samples, enabling flexible compression conditioned on specified ratios while minimizing performance degradation. Notably, the compressor is augmented with a copy mechanism, allowing tokens to be directly copied from the original code snippets. Evaluation results show that CodePromptZip surpasses SOTA entropy-based and distillation-based baselines, improving over the best baseline by 23.4%, 28.7%, and 8.7% for Assertion Generation, Bugs2Fix, and Code Suggestion, respectively.
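The type-aware, priority-driven pruning idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the token types, their removal-priority ranking, and the `prune` helper are all assumptions made for demonstration (the paper derives the actual ranking from ablation analysis and uses a trained LM compressor on top).

```python
# Hypothetical sketch of type-aware, priority-driven token pruning.
# The priority table below is an illustrative assumption, NOT the
# ranking derived in the paper's ablation study.
from dataclasses import dataclass

@dataclass
class Token:
    text: str
    type: str  # e.g., "Comment", "Identifier", "Keyword"

# Assumed removal priority: lower value = more expendable, removed first.
REMOVAL_PRIORITY = {"Comment": 0, "Identifier": 1, "Invocation": 2, "Keyword": 3}

def prune(tokens: list[Token], ratio: float) -> list[Token]:
    """Keep roughly `ratio` of the tokens, dropping the most
    expendable types first and preserving original order."""
    budget = int(len(tokens) * ratio)
    n_drop = len(tokens) - budget
    # Indices sorted most-expendable-first; stable sort keeps
    # earlier tokens of the same type ahead in the drop queue.
    order = sorted(range(len(tokens)),
                   key=lambda i: REMOVAL_PRIORITY.get(tokens[i].type, 99))
    drop = set(order[:n_drop])
    return [t for i, t in enumerate(tokens) if i not in drop]
```

For example, pruning `[assert, // check, x, foo]` at ratio 0.5 drops the comment first, then the earliest identifier, yielding `[assert, foo]`.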