🤖 AI Summary
Existing LLM-based embedding methods predominantly adopt encoder-only architectures, treating LLMs as static feature extractors and thus struggling to capture deep semantic structure.
Method: This paper proposes GIRCSE, the first framework to integrate autoregressive generation with iterative contrastive optimization for text embedding. It generates sequences of soft tokens and progressively refines the semantic representation across iterations, addressing encoder-only methods' weakness at capturing implicit semantics. The method introduces an iterative contrastive learning objective and identifies an emergent test-time scaling phenomenon: generating more tokens at inference consistently improves embedding quality.
Contribution/Results: GIRCSE achieves significant gains over state-of-the-art encoder-based baselines on the MTEB benchmark and on instruction-following tasks, demonstrating the effectiveness, scalability, and generalization of the generative embedding paradigm.
📝 Abstract
Existing large language model (LLM)-based embedding methods typically adopt an encoder-only paradigm, treating LLMs as static feature extractors and overlooking their core generative strengths. We introduce GIRCSE (Generative Iterative Refinement for Contrastive Sentence Embeddings), a novel framework that leverages autoregressive generation to iteratively refine semantic representations. By producing sequences of soft tokens optimized under a contrastive objective, GIRCSE captures latent concepts and implicit semantics that encoder-only methods often miss. To guide this process, we propose an Iterative Contrastive Refinement (ICR) objective that encourages each refinement step to yield better representations. Extensive experiments show that GIRCSE outperforms strong LLM-based embedding baselines on the MTEB benchmark and instruction-following tasks. Moreover, GIRCSE exhibits an emergent test-time scaling property: generating more tokens at inference steadily improves embedding quality. Our results establish generative iterative refinement as a new paradigm for representation learning.
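The abstract does not give the exact form of the ICR objective, but its idea (a contrastive loss applied at every refinement step, arranged so later steps are pushed to produce better embeddings) can be sketched as below. The cosine-similarity InfoNCE form, the linear per-step weighting, and all function names here are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.05):
    """Cosine-similarity InfoNCE loss for one (anchor, positive) pair
    against a list of negatives (an assumed, standard contrastive form)."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(anchor, positive) / tau)
    neg = sum(np.exp(cos(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))

def icr_loss(step_embeddings, positive, negatives, tau=0.05):
    """Hypothetical sketch of an Iterative Contrastive Refinement objective:
    score the embedding produced at each refinement step with a contrastive
    loss, weighting later steps more heavily so each iteration is encouraged
    to improve on the previous one. The linear weighting is an illustrative
    choice, not taken from the paper."""
    num_steps = len(step_embeddings)
    weights = np.arange(1, num_steps + 1, dtype=float)
    weights /= weights.sum()
    return sum(w * info_nce(e, positive, negatives, tau)
               for w, e in zip(weights, step_embeddings))
```

Under this sketch, the emergent test-time scaling property would correspond to running more refinement steps at inference and taking the final (or a pooled) step embedding as the sentence representation.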