Breaking the Batch Barrier (B3) of Contrastive Learning via Smart Batch Mining

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
In contrastive learning, representations degrade when in-batch negative samples are few or of low quality, so performance typically relies on large batch sizes. To address this, we propose a teacher-guided batch construction method: first, a pretrained teacher model ranks examples and a sparse similarity graph is built over the training data; then, community detection (via the Louvain algorithm) identifies clusters of semantically close examples that serve as strong negatives for one another; finally, batches are assembled from these communities so that they are rich in highly discriminative in-batch negatives. This is the first work to leverage community discovery for contrastive batch construction, effectively decoupling performance from large-batch requirements—achieving state-of-the-art results even at batch size 64. On the MMEB 36-task benchmark, our method improves scores by +1.3 and +2.9 points for 7B- and 2B-parameter models, respectively, while using only 1/4–1/16 the batch size of prior approaches.
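The pipeline above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name, the use of cosine similarity over teacher embeddings, the top-k sparsification, and the per-community chunking into batches are all assumptions made for clarity.

```python
import numpy as np
import networkx as nx

def build_batches(embeddings, k=5, batch_size=8, seed=0):
    """Sketch of B3-style batch construction (hypothetical details):
    sparse kNN similarity graph -> Louvain communities -> batches
    drawn from within each community, so co-members act as hard
    in-batch negatives for one another."""
    n = len(embeddings)
    # Cosine similarity from the (assumed) teacher model's embeddings.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity

    # Sparsify: keep only each example's top-k most similar neighbors.
    graph = nx.Graph()
    graph.add_nodes_from(range(n))
    for i in range(n):
        for j in np.argsort(sim[i])[-k:]:
            graph.add_edge(i, int(j), weight=float(sim[i, j]))

    # Community detection over the sparse graph (Louvain).
    communities = nx.community.louvain_communities(graph, seed=seed)

    # Chunk each semantically cohesive community into batches, so every
    # batch is rich in mutually discriminative negatives.
    batches = []
    for community in communities:
        pool = sorted(community)
        for start in range(0, len(pool), batch_size):
            batches.append(pool[start:start + batch_size])
    return batches
```

In practice the similarity graph would be built over the full dataset with approximate nearest-neighbor search rather than a dense `n × n` matrix; the dense version here is only for readability.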

📝 Abstract
Contrastive learning (CL) is a prevalent technique for training embedding models, which pulls semantically similar examples (positives) closer in the representation space while pushing dissimilar ones (negatives) further apart. A key source of negatives is 'in-batch' examples, i.e., positives from other examples in the batch. The effectiveness of such models is hence strongly influenced by the size and quality of training batches. In this work, we propose 'Breaking the Batch Barrier' (B3), a novel batch construction strategy designed to curate high-quality batches for CL. Our approach begins by using a pretrained teacher embedding model to rank all examples in the dataset, from which a sparse similarity graph is constructed. A community detection algorithm is then applied to this graph to identify clusters of examples that serve as strong negatives for one another. The clusters are then used to construct batches that are rich in in-batch negatives. Empirical results on the MMEB multimodal embedding benchmark (36 tasks) demonstrate that our method sets a new state of the art, outperforming previous best methods by +1.3 and +2.9 points at the 7B and 2B model scales, respectively. Notably, models trained with B3 surpass existing state-of-the-art results even with a batch size as small as 64, which is 4-16x smaller than that required by other methods.
Problem

Research questions and friction points this paper is trying to address.

Contrastive learning performance depends strongly on the size and quality of training batches
Randomly drawn in-batch negatives are often weak, degrading learned representations
Prior methods require very large batch sizes to obtain enough strong negatives
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses pretrained teacher model for example ranking
Constructs sparse similarity graph for clustering
Applies community detection (Louvain) to assemble negative-rich batches