🤖 AI Summary
Long prompts for black-box large language models (LLMs) incur high inference costs. Method: This paper proposes a prompt compression paradigm that employs a lightweight open-source model as a trainable compressor, replacing conventional uncontrolled extractive or abstractive approaches. The authors introduce the first systematic benchmark tailored to LLM prompt compressors and formulate a joint optimization objective balancing compression-rate adherence with downstream task performance. The meta-prompt of the best vanilla compressor is first optimized with TextGrad; the most promising open-source model, Qwen3-4B, is then post-trained with supervised fine-tuning (SFT) and Group Relative Policy Optimization (GRPO). Contribution/Results: Experiments on MeetingBank, LongBench, and GSM8K demonstrate substantial improvements over state-of-the-art extractive and abstractive baselines. The method generalizes across diverse compression rates and input lengths and achieves superior cost-quality trade-offs.
📝 Abstract
Motivated by the high costs of using black-box Large Language Models (LLMs), we introduce a novel prompt compression paradigm in which smaller LLMs compress inputs for larger ones. We present the first comprehensive LLM-as-a-compressor benchmark spanning 25 open- and closed-source models, which reveals significant disparities in models' compression ability in terms of (i) preserving semantically important information and (ii) following the user-provided compression rate (CR). We further improve the performance of gpt-4.1-mini, the best overall vanilla compressor, with TextGrad-based optimization of the compression meta-prompt. We also identify the most promising open-source vanilla LLM, Qwen3-4B, and post-train it with a combination of supervised fine-tuning (SFT) and Group Relative Policy Optimization (GRPO), pursuing the dual objective of adhering to the requested CR and maximizing downstream task performance. We call the resulting model Cmprsr and demonstrate its superiority over both extractive and vanilla abstractive compression across the entire range of compression rates, on lengthy inputs from MeetingBank and LongBench as well as short prompts from GSM8K. The latter highlights Cmprsr's generalizability across varying input lengths and domains. Moreover, Cmprsr closely follows the requested compression rate, offering fine-grained control over the cost-quality trade-off.
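The dual objective described above, rewarding both adherence to the requested compression rate and downstream task quality, can be sketched as a scalar reward of the kind a GRPO-style trainer would score completions with. The paper does not specify its reward; the function names, the CR definition (fraction of tokens kept), and the linear weighting below are illustrative assumptions, not the authors' actual formulation.

```python
# Hypothetical dual-objective reward: blends closeness to the target
# compression rate (CR) with a downstream task score in [0, 1].
# All names and the weighting scheme are illustrative assumptions.

def compression_rate(original_tokens: int, compressed_tokens: int) -> float:
    """CR as the fraction of tokens kept: compressed / original."""
    return compressed_tokens / max(original_tokens, 1)

def reward(original_tokens: int, compressed_tokens: int,
           target_cr: float, task_score: float, alpha: float = 0.5) -> float:
    """Linear blend: alpha weights CR adherence (1 minus the absolute
    deviation from the target rate), the rest weights task quality."""
    cr_deviation = abs(compression_rate(original_tokens, compressed_tokens) - target_cr)
    return alpha * (1.0 - cr_deviation) + (1.0 - alpha) * task_score
```

Under this sketch, a compression that hits the requested rate exactly and preserves full task performance scores 1.0, while over- or under-compressing is penalized smoothly, which is what would let a policy-optimization step trade the two goals off against each other.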