Cmprsr: Abstractive Token-Level Question-Agnostic Prompt Compressor

📅 2025-11-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Long prompts for black-box large language models (LLMs) incur high inference costs. Method: This paper proposes a prompt compression paradigm in which a lightweight open-source model serves as a trainable compressor, replacing uncontrolled extractive or abstractive approaches. The authors introduce the first systematic benchmark for LLM prompt compressors and formulate a joint objective balancing compression-rate adherence with downstream task performance. The best vanilla compressor (gpt-4.1-mini) is further improved via TextGrad-driven meta-prompt optimization, while the best open-source vanilla model (Qwen3-4B) is post-trained with supervised fine-tuning (SFT) and Group Relative Policy Optimization (GRPO), yielding the Cmprsr model. Contribution/Results: Experiments on MeetingBank, LongBench, and GSM8K demonstrate substantial improvements over state-of-the-art extractive and abstractive baselines, strong generalization across diverse compression rates and input lengths, and superior cost-quality trade-offs.

📝 Abstract
Motivated by the high costs of using black-box Large Language Models (LLMs), we introduce a novel prompt compression paradigm, under which we use smaller LLMs to compress inputs for the larger ones. We present the first comprehensive LLM-as-a-compressor benchmark spanning 25 open- and closed-source models, which reveals significant disparity in models' compression ability in terms of (i) preserving semantically important information and (ii) following the user-provided compression rate (CR). We further improve the performance of gpt-4.1-mini, the best overall vanilla compressor, with TextGrad-based compression meta-prompt optimization. We also identify the most promising open-source vanilla LLM - Qwen3-4B - and post-train it with a combination of supervised fine-tuning (SFT) and Group Relative Policy Optimization (GRPO), pursuing the dual objective of CR adherence and maximizing the downstream task performance. We call the resulting model Cmprsr and demonstrate its superiority over both extractive and vanilla abstractive compression across the entire range of compression rates on lengthy inputs from MeetingBank and LongBench as well as short prompts from GSM8K. The latter highlights Cmprsr's generalizability across varying input lengths and domains. Moreover, Cmprsr closely follows the requested compression rate, offering fine control over the cost-quality trade-off.
Problem

Research questions and friction points this paper is trying to address.

Reducing costs of using black-box LLMs through prompt compression
Improving compression rate adherence while preserving semantic information
Developing a generalizable compressor for varying input lengths and domains
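The compression-rate adherence named above can be quantified directly. A minimal sketch, assuming the compression rate is defined as the ratio of compressed to original token counts (the paper's exact tokenization and metric may differ):

```python
def compression_rate(original_tokens: int, compressed_tokens: int) -> float:
    """Fraction of the original prompt that survives compression."""
    if original_tokens <= 0:
        raise ValueError("original prompt must be non-empty")
    return compressed_tokens / original_tokens

def cr_adherence_error(requested_cr: float, achieved_cr: float) -> float:
    """Absolute deviation from the user-requested compression rate."""
    return abs(achieved_cr - requested_cr)

# Example: a 1000-token prompt compressed to 230 tokens against a 0.25 target.
achieved = compression_rate(1000, 230)       # 230/1000 = 0.23
error = cr_adherence_error(0.25, achieved)   # ~0.02 off the requested rate
```

A compressor with fine CR control keeps this error small across the whole range of requested rates, which is what lets the user trade quality against cost predictably.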
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses smaller LLMs to compress inputs for larger models
Optimizes compression via Textgrad-based meta-prompt optimization
Post-trains models with SFT and GRPO for dual objectives
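The compress-then-answer pipeline these bullets describe can be sketched as follows. The compressor instruction and the model hooks here are illustrative assumptions, not the paper's actual meta-prompt (which is optimized with TextGrad rather than hand-written):

```python
from typing import Callable

def build_compressor_prompt(document: str, target_cr: float) -> str:
    """Instruct a small LLM (e.g. Qwen3-4B) to abstractively compress a prompt.

    The wording is a hypothetical stand-in for the paper's optimized meta-prompt.
    """
    return (
        f"Rewrite the following text, keeping all information needed to answer "
        f"questions about it, in roughly {target_cr:.0%} of the original token "
        f"count. Output only the compressed text.\n\n{document}"
    )

def compress_then_answer(
    document: str,
    question: str,
    target_cr: float,
    small_llm: Callable[[str], str],  # cheap open-source compressor
    large_llm: Callable[[str], str],  # expensive black-box target model
) -> str:
    """Compress with the small model, then query the large model."""
    compressed = small_llm(build_compressor_prompt(document, target_cr))
    return large_llm(f"{compressed}\n\nQuestion: {question}")

# Toy stand-ins so the sketch runs without any model access:
def toy_small(prompt: str) -> str:
    # Naive truncation of the document portion; a real compressor is abstractive.
    return prompt.rsplit("\n\n", 1)[-1][:40]

def toy_large(prompt: str) -> str:
    return f"answered using {len(prompt)} prompt chars"

print(compress_then_answer("a long meeting transcript " * 20,
                           "Who spoke?", 0.25, toy_small, toy_large))
```

The design point is that only the small model ever sees the full-length prompt, so the black-box model's per-token cost applies to the compressed text alone.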
Ivan Zakazov
EPFL
natural language processing, medical imaging
Alexander Sharipov
EPFL
Berke Argin
EPFL
Oussama Gabouj
EPFL
Kamel Charaf
EPFL
Alexi Semiz
EPFL
Lorenzo Drudi
EPFL
Nicolas Baldwin
EPFL
Robert West
EPFL