Learning to Compress: Unlocking the Potential of Large Language Models for Text Representation

📅 2025-11-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) struggle to produce high-quality holistic text representations due to their pretraining objective—autoregressive token-level prediction—which inherently prioritizes local lexical coherence over global semantic structure. To address this, we propose *context compression*, a novel unsupervised pretraining task wherein the model encodes long input contexts into a compact sequence of memory tokens and reconstructs the original sequence from them. This paradigm shifts focus from token-level modeling to holistic representation learning and explicitly enforces global semantic consistency via contrastive learning. Based on this objective, we introduce LLM2Comp—a lightweight, efficient encoder derived from frozen LLM backbones. Empirical evaluation shows that LLM2Comp significantly outperforms state-of-the-art LLM-based text encoders (e.g., Instructor, BGE) on downstream tasks including text classification and semantic retrieval, while requiring only 20–33% of their training data. It achieves superior sample efficiency, stronger generalization across domains, and reduced inference latency.
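The compression objective described above can be illustrated with a toy sketch: pool the context into a few memory vectors, then score how well a decoder can rebuild the original sequence from those vectors alone. Mean-pooling and a linear map stand in for the learned memory tokens and the LLM decoder; all names, shapes, and the pooling scheme are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d, k = 12, 8, 4  # context length, embedding dim, number of memory tokens

def compress(token_embs, num_memory):
    # Stand-in for learned memory tokens: pool contiguous chunks of the
    # context into num_memory vectors of dimension d.
    chunks = np.array_split(token_embs, num_memory, axis=0)
    return np.stack([c.mean(axis=0) for c in chunks])  # shape (num_memory, d)

def reconstruction_loss(token_embs, memory, W):
    # Stand-in for the decoder: a linear map from the flattened memory
    # tokens back to every position's embedding. The real objective instead
    # conditions the LLM decoder only on the memory tokens and scores
    # token-level reconstruction of the original sequence.
    pred = (memory.reshape(-1) @ W).reshape(token_embs.shape)
    return float(((pred - token_embs) ** 2).mean())

tokens = rng.normal(size=(seq_len, d))           # dummy context embeddings
memory = compress(tokens, k)                     # compact memory tokens
W = rng.normal(size=(k * d, seq_len * d)) * 0.1  # toy decoder weights
loss = reconstruction_loss(tokens, memory, W)    # quantity to minimize

# After pretraining, the memory tokens double as the holistic text
# representation, e.g. mean-pooled into a single vector:
text_embedding = memory.mean(axis=0)  # shape (d,)
```

Minimizing such a reconstruction loss forces the memory tokens to summarize the whole context rather than any single position, which is the intuition behind preferring compression over token-level pretext tasks.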

📝 Abstract
Text representation plays a critical role in tasks like clustering, retrieval, and other downstream applications. With the emergence of large language models (LLMs), there is increasing interest in harnessing their capabilities for this purpose. However, most LLMs are inherently causal and optimized for next-token prediction, making them suboptimal for producing holistic representations. To address this, recent studies have introduced pretext tasks to adapt LLMs for text representation. Most of these tasks, however, rely on token-level prediction objectives, such as the masked next-token prediction (MNTP) used in LLM2Vec. In this work, we explore the untapped potential of context compression as a pretext task for the unsupervised adaptation of LLMs. During compression pretraining, the model learns to generate compact memory tokens that substitute for the whole context in downstream sequence prediction. Experiments demonstrate that a well-designed compression objective can significantly enhance LLM-based text representations, outperforming models trained with token-level pretext tasks. Further improvement through contrastive learning yields a strong representation model, LLM2Comp, which outperforms contemporary LLM-based text encoders on a wide range of tasks while being more sample-efficient, requiring significantly less training data.
Problem

Research questions and friction points this paper is trying to address.

Optimizing LLMs for holistic text representation beyond next-token prediction
Replacing token-level pretext tasks with context compression for better representations
Enhancing sample efficiency and performance across diverse text understanding tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses context compression as an unsupervised pretext task
Generates compact memory tokens that substitute for the full context in sequence prediction
Strengthens the resulting representation via contrastive learning