TransAug: Translate as Augmentation for Sentence Embeddings

📅 2021-10-30
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limitation of small-scale sentence datasets in improving sentence embedding quality, this paper proposes a data augmentation paradigm leveraging translation sentence pairs. Methodologically, the approach comprises two stages: (1) knowledge distillation transfers the English SimCSE encoder to Chinese, yielding a semantically aligned bilingual encoder; (2) the Chinese encoder is frozen while only the English encoder is updated via cross-lingual contrastive learning, enabling implicit data augmentation and parameter-efficient optimization. To our knowledge, this is the first work to incorporate translation pairs into a sentence-level contrastive learning framework. Experiments demonstrate that our method achieves new state-of-the-art performance on the STS benchmarks, significantly outperforming SimCSE and Sentence-T5. Moreover, it attains superior results across multiple SentEval transfer tasks, validating its generalization capability and effectiveness.
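The first stage described above (distilling a Chinese encoder so its embeddings align with the frozen English SimCSE teacher's) can be sketched as follows. This is a minimal illustration with toy linear "encoders" standing in for the real transformer models; all names, shapes, and the plain MSE objective are assumptions for illustration, not the paper's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out = 32, 16, 8

# Frozen English SimCSE teacher: precomputed embeddings of English sentences.
teacher_emb = rng.normal(size=(n, d_out))
# Toy input features for the parallel Chinese translations of those sentences.
zh_inputs = rng.normal(size=(n, d_in))
# Trainable Chinese student "encoder" (a single linear layer for illustration).
student_W = rng.normal(size=(d_in, d_out)) * 0.1

def distill_step(W, lr=0.05):
    """One MSE-distillation step: pull the student's embeddings of the Chinese
    sentences toward the frozen teacher's embeddings of their English pairs."""
    pred = zh_inputs @ W
    loss = np.mean((pred - teacher_emb) ** 2)
    grad = 2 * zh_inputs.T @ (pred - teacher_emb) / n  # d(loss)/dW
    return W - lr * grad, loss

losses = []
for _ in range(50):
    student_W, loss = distill_step(student_W)
    losses.append(loss)

print(f"distillation loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

After distillation, translation pairs map to nearby points in a shared semantic space, which is what lets the second stage treat them as positives.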
📝 Abstract
While contrastive learning greatly advances the representation of sentence embeddings, it is still limited by the size of existing sentence datasets. In this paper, we present TransAug (Translate as Augmentation), which provides the first exploration of utilizing translated sentence pairs as data augmentation for text, and introduce a two-stage paradigm to advance state-of-the-art sentence embeddings. Instead of adopting an existing encoder trained in other languages, we first distill a Chinese encoder from a SimCSE encoder (pretrained in English) so that their embeddings are close in semantic space, which can be regarded as implicit data augmentation. Then, we update only the English encoder via cross-lingual contrastive learning while freezing the distilled Chinese encoder. Our approach achieves a new state of the art on standard semantic textual similarity (STS) tasks, outperforming both SimCSE and Sentence-T5, and achieves the best performance in the corresponding tracks on transfer tasks evaluated by SentEval.
Problem

Research questions and friction points this paper is trying to address.

Enhancing sentence embeddings via translation-based data augmentation
Overcoming dataset size limits in contrastive learning for text
Improving cross-lingual semantic similarity with distilled encoders
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes translated sentence pairs for augmentation
Distills Chinese encoder from English SimCSE
Employs cross-lingual contrastive learning
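The cross-lingual contrastive objective listed above can be sketched as a standard InfoNCE loss over a batch of translation pairs, assuming each English sentence's Chinese translation is its positive and the other Chinese sentences in the batch are its negatives. The function name, temperature value, and toy embeddings below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cross_lingual_info_nce(en_emb, zh_emb, temperature=0.05):
    """InfoNCE over translation pairs: row i of en_emb (trainable English
    encoder) should be most similar to row i of zh_emb (frozen Chinese
    encoder) among all Chinese embeddings in the batch."""
    en = en_emb / np.linalg.norm(en_emb, axis=1, keepdims=True)
    zh = zh_emb / np.linalg.norm(zh_emb, axis=1, keepdims=True)
    logits = en @ zh.T / temperature                 # scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))               # -log p(true pair | batch)

rng = np.random.default_rng(1)
zh = rng.normal(size=(8, 4))
aligned_en = zh + 0.01 * rng.normal(size=(8, 4))     # well-aligned translations
random_en = rng.normal(size=(8, 4))                  # no alignment

loss_aligned = cross_lingual_info_nce(aligned_en, zh)
loss_random = cross_lingual_info_nce(random_en, zh)
print(loss_aligned < loss_random)
```

Because the Chinese side is frozen, gradients flow only into the English encoder, which is the parameter-efficient update the summary describes.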