🤖 AI Summary
To address the challenge of distinguishing fine-grained semantic differences in text embedding models, this paper proposes a multi-granularity hard negative sampling framework and an anchor-token-aware pooling method. The former leverages large language models to generate hierarchically structured, semantically proximal negative samples, enabling a coarse-to-fine curriculum learning paradigm. The latter enhances text representations through keyword-token-weighted aggregation, improving semantic sensitivity without increasing the model's parameter count. Evaluated on the MTEB benchmark, the approach significantly outperforms existing negative sampling and pooling strategies on both synthetic and public retrieval tasks, achieving state-of-the-art (SOTA) performance. Comprehensive experiments demonstrate its effectiveness in fine-grained semantic modeling and its strong generalization across diverse downstream applications.
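The coarse-to-fine curriculum described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes unit-normalized embedding vectors, a standard InfoNCE-style contrastive loss over one (query, positive, negatives) triplet, and a curriculum that simply orders negatives from least to most similar to the query (coarse to hard). The function names `info_nce_loss` and `curriculum_order` are hypothetical.

```python
import numpy as np

def info_nce_loss(q, pos, negs, temperature=0.05):
    """InfoNCE over one (query, positive, negatives) set.
    All inputs are assumed to be L2-normalized vectors, so the
    dot product is cosine similarity."""
    sims = np.array([q @ pos] + [q @ n for n in negs]) / temperature
    sims -= sims.max()  # subtract max for numerical stability
    return -np.log(np.exp(sims[0]) / np.exp(sims).sum())

def curriculum_order(q, negatives):
    """Coarse-to-fine schedule: present negatives that are least
    similar to the query first, hardest (most similar) last."""
    return sorted(negatives, key=lambda n: q @ n)
```

In a real training loop the hardest negatives would be the LLM-synthesized, semantically proximal ones, introduced in later curriculum stages.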
📝 Abstract
Text embedding models are essential for various natural language processing tasks, enabling the effective encoding of semantic information into dense vector representations. These models are typically optimized via contrastive learning on (query, positive, negative) triplets, where the negative samples play a critical role in enhancing the model's ability to discern subtle semantic distinctions. In this work, we introduce a Multi-Granularity Hard-negative (MGH) synthesis framework that leverages large language models (LLMs) to generate diverse negative samples with varying levels of similarity to the query. This approach facilitates a coarse-to-fine curriculum learning strategy during supervised training, allowing the embedding model to progressively learn more nuanced semantic representations. Meanwhile, we propose an Anchor Token Aware (ATA) pooling method that assigns higher weights to anchor tokens based on aggregation patterns observed in LLMs, improving text embedding accuracy without increasing model complexity. Comprehensive experiments on the MTEB benchmark demonstrate that our methods achieve state-of-the-art performance, surpassing existing synthesis strategies both when trained on synthetic data alone and when combined with public retrieval datasets.
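The anchor-token-weighted pooling idea can be illustrated with a short sketch. This is an assumption-laden toy, not the paper's ATA method: it takes per-token embeddings and a vector of anchor scores (which the paper derives from aggregation patterns in LLMs; here they are just given), normalizes the scores into weights, and returns a unit-norm weighted mean. The function name `anchor_token_pooling` is hypothetical.

```python
import numpy as np

def anchor_token_pooling(token_embs, anchor_scores):
    """Weighted-mean pooling over token embeddings.

    token_embs:    array of shape (num_tokens, dim)
    anchor_scores: nonnegative per-token scores; anchor (keyword)
                   tokens receive larger scores and therefore
                   dominate the pooled sentence embedding.
    """
    w = np.asarray(anchor_scores, dtype=float)
    w = w / w.sum()                          # normalize to a distribution
    pooled = (w[:, None] * token_embs).sum(axis=0)
    return pooled / np.linalg.norm(pooled)   # unit-norm embedding
```

Compared with plain mean pooling (uniform weights), this keeps the output dimensionality and parameter count unchanged while letting salient tokens steer the representation.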