🤖 AI Summary
Decoder-only large language models (LLMs) suffer degraded text-embedding quality because causal attention blocks information flow from later to earlier tokens, impeding long-range dependency modeling; existing single-token prepending approaches further exacerbate information loss through excessive compression. To address this, we propose Hierarchical Token Prepending (HTP): the input sequence is partitioned into blocks, and a summary token of each block is prepended to subsequent blocks, establishing multiple pathways for backward information flow. Additionally, last-token pooling is replaced with mean pooling at the readout layer to mitigate over-squashing. HTP jointly improves attention dynamics and representation readout, is architecture-agnostic, and supports both zero-shot and fine-tuned settings. Evaluated on 11 retrieval and 30 general-purpose embedding benchmarks, HTP consistently outperforms state-of-the-art methods, delivering particularly substantial gains on long-document tasks.
📝 Abstract
Large language models produce powerful text embeddings, but their causal attention mechanism restricts the flow of information from later to earlier tokens, degrading representation quality. Recent methods attempt to solve this by prepending a single summary token, but they over-compress information, harming performance on long documents. We propose Hierarchical Token Prepending (HTP), a method that resolves two critical bottlenecks. To mitigate attention-level compression, HTP partitions the input into blocks and prepends block-level summary tokens to subsequent blocks, creating multiple pathways for backward information flow. To address readout-level over-squashing, we replace last-token pooling with mean pooling, a choice supported by theoretical analysis. HTP achieves consistent performance gains across 11 retrieval datasets and 30 general embedding benchmarks, especially in long-context settings. As a simple, architecture-agnostic method, HTP enhances both zero-shot and fine-tuned models, offering a scalable route to superior long-document embeddings.
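The block-partitioning and readout ideas above can be illustrated with a minimal sketch. This is a hypothetical toy version, not the authors' implementation: it assumes the summary token of a block is simply the mean of that block's token embeddings, uses a made-up `htp_embed` function name, and operates on raw vectors rather than inside a transformer.

```python
import numpy as np

def htp_embed(token_embs: np.ndarray, block_size: int = 4) -> np.ndarray:
    """Toy sketch of the HTP idea: prepend per-block summary tokens,
    then mean-pool the whole sequence for the final embedding."""
    # Partition the token embeddings into fixed-size blocks.
    blocks = [token_embs[i:i + block_size]
              for i in range(0, len(token_embs), block_size)]
    augmented = []
    for idx, block in enumerate(blocks):
        if idx > 0:
            # Prepend a summary (here: the mean) of the previous block,
            # giving later tokens a backward pathway to earlier content
            # despite causal attention.
            summary = blocks[idx - 1].mean(axis=0, keepdims=True)
            augmented.append(summary)
        augmented.append(block)
    sequence = np.concatenate(augmented, axis=0)
    # Readout: mean-pooling over all positions instead of last-token pooling.
    return sequence.mean(axis=0)

embs = np.random.default_rng(0).normal(size=(10, 8))  # 10 tokens, dim 8
vec = htp_embed(embs)
print(vec.shape)  # (8,)
```

In the actual method the summary tokens would be consumed by the model's causal attention; this sketch only shows how the augmented sequence is assembled and pooled.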