Negative Matters: Multi-Granularity Hard-Negative Synthesis and Anchor-Token-Aware Pooling for Enhanced Text Embeddings

📅 2025-08-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of distinguishing fine-grained semantic differences in text embedding models, this paper proposes a multi-granularity hard-negative sampling framework and an anchor-token-aware pooling method. The former leverages large language models to generate hierarchically structured, semantically proximal negative samples, enabling a coarse-to-fine curriculum learning paradigm. The latter enhances text representations through keyword-token-weighted aggregation, improving semantic sensitivity without adding model parameters. Evaluated on the MTEB benchmark, the approach significantly outperforms existing negative-sampling and pooling strategies across both synthetic and public retrieval tasks, achieving state-of-the-art (SOTA) performance. Comprehensive experiments demonstrate its effectiveness in fine-grained semantic modeling and its strong generalization across diverse downstream applications.

📝 Abstract
Text embedding models are essential for various natural language processing tasks, enabling the effective encoding of semantic information into dense vector representations. These models are typically optimized using triplets of (query, positive, negative) data pairs for contrastive learning, where the negative samples play a critical role in enhancing the model's ability to discern subtle semantic distinctions. In this work, we introduce a Multi-Granularity Hard-negative (MGH) synthesis framework that leverages large language models (LLMs) to generate diverse negative samples with varying levels of similarity to the query. This approach facilitates a coarse-to-fine curriculum learning strategy during supervised training, allowing the embedding model to progressively learn more nuanced semantic representations. Meanwhile, we propose an Anchor Token Aware (ATA) pooling method that assigns higher weights to anchor tokens based on aggregation patterns observed in LLMs, improving text embedding accuracy without increasing model complexity. Comprehensive experiments on the MTEB benchmark demonstrate that our methods achieve state-of-the-art performance, surpassing existing synthesis strategies both with synthetic data and when combined with public retrieval datasets.
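The abstract's ATA pooling amounts to a weighted average over token embeddings in which anchor (keyword) tokens receive larger weights. A minimal sketch of that aggregation step is below; the paper derives the per-token weights from aggregation patterns observed in LLMs, whereas here the weights are simply taken as given inputs, so the function name and interface are assumptions for illustration.

```python
import numpy as np

def anchor_weighted_pooling(hidden_states, anchor_weights):
    """Pool per-token embeddings into a single text embedding,
    up-weighting anchor tokens.

    hidden_states:  (seq_len, dim) token embeddings from the encoder.
    anchor_weights: (seq_len,) non-negative per-token scores; anchor
                    tokens carry larger scores. How these scores are
                    obtained (the paper's LLM aggregation patterns)
                    is outside this sketch.
    """
    w = np.asarray(anchor_weights, dtype=float)
    w = w / w.sum()                        # normalize to a convex combination
    return w @ np.asarray(hidden_states)   # weighted average over tokens
```

With uniform weights this reduces to ordinary mean pooling, which is why the method adds no parameters: only the pooling weights change, not the model.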
Problem

Research questions and friction points this paper is trying to address.

Generating multi-granularity hard negatives using LLMs
Improving text embedding through anchor-token-aware pooling
Enhancing semantic distinction in contrastive learning frameworks
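The contrastive setup referenced above typically scores a query against one positive and several negatives with an InfoNCE-style objective; harder negatives make the softmax sharper around the positive. The sketch below shows that generic loss form under the assumption of cosine-normalized embeddings; it is not the paper's exact objective, and the temperature value is illustrative.

```python
import numpy as np

def infonce_loss(q, pos, negs, tau=0.05):
    """Generic InfoNCE loss for one (query, positive, negatives) triplet.

    q:    (d,) query embedding.
    pos:  (d,) positive embedding.
    negs: (k, d) negative embeddings.
    All vectors are L2-normalized so dot products are cosine scores.
    """
    def norm(x):
        x = np.asarray(x, dtype=float)
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    q, pos, negs = norm(q), norm(pos), norm(negs)
    logits = np.concatenate([[q @ pos], negs @ q]) / tau
    logits -= logits.max()                 # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[0])                   # positive sits at index 0
```

Negatives that are nearly as similar to the query as the positive dominate the denominator, which is exactly why the quality of hard negatives matters for fine-grained distinctions.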
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-Granularity Hard-negative synthesis using LLMs
Coarse-to-fine curriculum learning strategy
Anchor Token Aware pooling method
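The coarse-to-fine curriculum listed above can be sketched as ordering training triplets by the hardness of their negatives, visiting coarse (low-similarity) negatives before fine (high-similarity) ones. The scheduling function below is an assumption for illustration; the paper's actual schedule and hardness measure may differ.

```python
def coarse_to_fine_schedule(triplets, hardness):
    """Order (query, positive, negative) triplets from coarse to fine.

    triplets: list of training examples.
    hardness: parallel list of scores, e.g. each negative's cosine
              similarity to its query; lower means coarser.
    Returns the triplets sorted so easy negatives are seen first.
    """
    order = sorted(range(len(triplets)), key=lambda i: hardness[i])
    return [triplets[i] for i in order]
```

Because MGH synthesizes negatives at explicit granularity levels, the hardness score could simply be the generation level rather than a computed similarity.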
Authors

Tengyu Pan
Tsinghua University, Beijing, China
Zhichao Duan
Tsinghua University (Natural Language Processing)
Zhenyu Li
Tsinghua University, Beijing, China
Bowen Dong
Tsinghua University, Beijing, China
Ning Liu
Shandong University, Shandong, China
Xiuxing Li
Associate Research Fellow, School of Computer Science, Beijing Institute of Technology (Data Mining, Knowledge Base, Knowledge Embedding)
Jianyong Wang
Tsinghua University, Beijing, China