Cropping outperforms dropout as an augmentation strategy for training self-supervised text embeddings

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of constructing high-quality text embedding models with minimal reliance on large-scale labeled data. We propose a lightweight self-supervised contrastive learning framework that replaces standard dropout with **sentence cropping** as the positive-sample augmentation strategy, demonstrating substantial improvements in embedding quality on both the MTEB benchmark and domain-specific datasets. Furthermore, we show that fine-tuning only the last 2–3 layers of a Transformer encoder suffices to approach supervised state-of-the-art performance, with embedding quality monotonically increasing as more top-layer parameters are tuned. Empirical results indicate that our method achieves over 98% of supervised baseline performance on domain data after only brief fine-tuning. This work establishes an efficient, scalable paradigm for text embedding modeling in low-resource settings, significantly reducing annotation dependency while maintaining competitive performance.
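The cropping augmentation described above can be sketched in a few lines; this is a minimal illustration assuming whitespace tokenization, and the function names and span fractions are hypothetical rather than taken from the paper:

```python
import random

def crop(tokens, min_frac=0.1, max_frac=0.5):
    """Sample a random contiguous span covering 10-50% of the tokens
    (the fraction range is an illustrative assumption)."""
    n = len(tokens)
    span = max(1, int(n * random.uniform(min_frac, max_frac)))
    start = random.randint(0, n - span)
    return tokens[start:start + span]

def positive_pair(text):
    """Two independent crops of the same text act as a positive pair
    for contrastive learning; crops of other texts serve as negatives."""
    tokens = text.split()
    return crop(tokens), crop(tokens)
```

Unlike dropout-based augmentation, which feeds the identical input through the encoder twice and relies on dropout noise to differentiate the views, the two crops here are genuinely different surface strings of the same underlying text.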

📝 Abstract
Text embeddings, i.e. vector representations of entire texts, play an important role in many NLP applications, such as retrieval-augmented generation, sentiment analysis, clustering, or visualizing collections of texts for data exploration. Currently, top-performing embedding models are derived from pre-trained language models via extensive supervised fine-tuning using curated text pairs. This contrasts with computer vision, where self-supervised training based on data augmentations has demonstrated remarkable success. Here we systematically compare the two most well-known augmentation strategies for positive pair generation in contrastive learning of text embeddings. We assess embedding quality on MTEB and additional in-domain evaluations and show that cropping augmentation strongly outperforms the dropout-based approach. We find that on out-of-domain data, the quality of resulting embeddings is below the supervised SOTA models, but for in-domain data, self-supervised fine-tuning produces high-quality text embeddings after very short fine-tuning, sometimes only marginally below the supervised SOTA. Finally, we show that representation quality increases towards the last transformer layers, which undergo the largest change during fine-tuning; and that fine-tuning only those last layers is sufficient to reach similar embedding quality.
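The contrastive objective underlying both augmentation strategies is typically an in-batch InfoNCE loss. A minimal NumPy sketch, where the temperature value and the use of in-batch negatives are standard-practice assumptions rather than details confirmed by the abstract:

```python
import numpy as np

def info_nce(z1, z2, temperature=0.05):
    """InfoNCE loss: row i of z1 and z2 embed two augmented views of the
    same text (the positive pair); all other rows in the batch act as
    in-batch negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                      # scaled cosine similarities
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                   # positives lie on the diagonal
```

The loss is minimized when each embedding is most similar to its own positive view and dissimilar to every other text in the batch.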
Problem

Research questions and friction points this paper is trying to address.

- Compares cropping vs. dropout for self-supervised text embeddings
- Evaluates embedding quality on MTEB and in-domain data
- Examines layer-wise impact during fine-tuning of transformers
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Cropping augmentation outperforms the dropout strategy
- Self-supervised fine-tuning for text embeddings
- Fine-tuning only the last transformer layers suffices
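The last point, restricting fine-tuning to the top transformer layers, amounts to freezing everything below a cutoff. A sketch of the parameter selection, assuming BERT-style parameter naming (`encoder.layer.N.`), which is an illustrative convention and not stated in the paper:

```python
def top_layer_params(param_names, last_k=3, num_layers=12):
    """Keep only parameters belonging to the top-k encoder layers for
    fine-tuning; embeddings and lower layers stay frozen. Assumes
    BERT-style names such as 'encoder.layer.11.attention.self.query.weight'."""
    prefixes = tuple(f"encoder.layer.{i}." for i in range(num_layers - last_k, num_layers))
    return [name for name in param_names if name.startswith(prefixes)]
```

In a framework like PyTorch, one would then set `requires_grad = False` on every parameter not in the returned list before building the optimizer.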