🤖 AI Summary
To address the substantial memory and computational overhead of many-shot prompting in large language model (LLM) in-context learning, this paper proposes MemCom, a layer-wise soft-token prompt compression method. MemCom compresses the many-shot intermediate representations at each Transformer layer, giving every layer its own compressed representation, and uses a stronger compressor model with more trainable parameters than prior prompt-compression approaches. Evaluated on Gemma and Mistral architectures (2B and 7B models) with many-shot sequences of 3k–6k tokens, MemCom remains stable under compression ratios of 3×–8×, typically degrading accuracy by less than 10%, whereas strong baselines often lose 20–30%. The authors report that MemCom outperforms these baselines across all compression ratios on multiple classification tasks with large label sets, preserving task accuracy while substantially reducing the memory and computational cost of ICL inference.
📝 Abstract
Large Language Models (LLMs) can learn different tasks without explicit fine-tuning when given many input-output examples (demonstrations) through In-Context Learning (ICL). Increasing the number of examples, called "shots", improves downstream task performance but incurs higher memory and computational costs. In this work, we study an approach to improve the memory and computational efficiency of ICL inference by compressing the many-shot prompts. Given many shots comprising t tokens, our goal is to generate a summary of m soft tokens, where m < t. We first show that existing prompt compression methods are ineffective for many-shot compression, and that simply using fewer shots is a surprisingly strong baseline. To achieve effective compression, we find that: (a) a stronger compressor model with more trainable parameters is necessary, and (b) compressing many-shot representations at each transformer layer enables more fine-grained compression by providing each layer with its own compressed representation. Based on these insights, we propose MemCom, a layer-wise compression method. We systematically evaluate various compressor models and training approaches across different model sizes (2B and 7B), architectures (Gemma and Mistral), many-shot sequence lengths (3k-6k tokens), and compression ratios (3x to 8x). MemCom outperforms strong baselines across all compression ratios on multiple classification tasks with large label sets. Notably, while baseline performance degrades sharply at higher compression ratios, often by over 20-30%, MemCom maintains high accuracy with minimal degradation, typically dropping by less than 10%.
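To make the layer-wise idea concrete, here is a minimal PyTorch sketch of one way such a compressor could work: at each transformer layer, m learned query vectors cross-attend over the t many-shot hidden states to produce that layer's own m-soft-token representation. All names, the cross-attention design, and the hyperparameters are assumptions for illustration; the paper does not specify that MemCom is implemented this way.

```python
import torch
import torch.nn as nn

class LayerwiseCompressor(nn.Module):
    """Toy layer-wise many-shot compressor (illustrative sketch, not
    the paper's actual MemCom architecture): per layer, m learnable
    queries cross-attend over the t shot-token hidden states to yield
    an m-token compressed representation for that layer."""

    def __init__(self, num_layers: int, d_model: int, m: int, n_heads: int = 4):
        super().__init__()
        # One set of m learnable query vectors per transformer layer.
        self.queries = nn.Parameter(torch.randn(num_layers, m, d_model) * 0.02)
        self.attns = nn.ModuleList(
            nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            for _ in range(num_layers)
        )

    def forward(self, layer_states):
        # layer_states: list of (batch, t, d_model) hidden states, one per layer.
        compressed = []
        for layer_idx, h in enumerate(layer_states):
            # Broadcast this layer's queries over the batch: (batch, m, d_model).
            q = self.queries[layer_idx].expand(h.size(0), -1, -1)
            # Cross-attention: queries attend over the t shot tokens.
            out, _ = self.attns[layer_idx](q, h, h)
            compressed.append(out)  # (batch, m, d_model)
        return compressed

# Example: compress t=120 shot tokens to m=40 soft tokens (3x) at each layer.
if __name__ == "__main__":
    comp = LayerwiseCompressor(num_layers=2, d_model=64, m=40)
    states = [torch.randn(1, 120, 64) for _ in range(2)]
    outs = comp(states)
    print([tuple(o.shape) for o in outs])
```

Because the queries and attention weights are trainable, such a compressor could in principle be optimized end-to-end for downstream task performance, which matches the abstract's finding that a stronger compressor with more trainable parameters is necessary.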