SMOTExT: SMOTE meets Large Language Models

📅 2025-05-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address data scarcity and class imbalance in domain-specific or low-resource NLP scenarios, this paper adapts SMOTE to text, proposing a linear-interpolation-based synthetic data generation method that operates in the semantic space of BERT embeddings. Coupled with the xRAG cross-modal retrieval-augmented generation framework, the approach decodes the interpolated vectors into semantically coherent synthetic text. This establishes a semantic-interpolation augmentation paradigm in text space, enabling privacy-preserving learning without access to the original sensitive data. Experiments demonstrate that models trained solely on synthetic data achieve over 92% of the performance attained with the original data in few-shot settings, while significantly improving robustness. Moreover, the method is naturally compatible with knowledge distillation and differential privacy mechanisms. By bridging semantic representation, controllable synthesis, and privacy-aware learning, this work offers a new pathway for NLP modeling under data-constrained and privacy-sensitive conditions.

📝 Abstract
Data scarcity and class imbalance are persistent challenges in training robust NLP models, especially in specialized domains or low-resource settings. We propose a novel technique, SMOTExT, that adapts the idea of the Synthetic Minority Over-sampling Technique (SMOTE) to textual data. Our method generates new synthetic examples by interpolating between the BERT-based embeddings of two existing examples and then decoding the resulting latent point into text with the xRAG architecture. By leveraging xRAG's cross-modal retrieval-generation framework, we can effectively turn interpolated vectors into coherent text. While this is preliminary work supported by qualitative outputs only, the method shows strong potential for knowledge distillation and data augmentation in few-shot settings. Notably, our approach also shows promise for privacy-preserving machine learning: in early experiments, training models solely on generated data achieved performance comparable to models trained on the original dataset. This suggests a viable path toward safe and effective learning under data protection constraints.
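The interpolation step described in the abstract follows classic SMOTE: a synthetic point is sampled uniformly on the line segment between two same-class embeddings. A minimal NumPy sketch of that step is below; the function name `smote_interpolate` and the toy 4-dimensional vectors are illustrative only (the paper operates on BERT embeddings and decodes the result with xRAG, which is not shown here):

```python
import numpy as np

def smote_interpolate(emb_a, emb_b, rng=None):
    """SMOTE-style synthesis: return a random point on the segment
    between two same-class embeddings (lambda ~ U[0, 1))."""
    rng = rng or np.random.default_rng()
    lam = rng.uniform(0.0, 1.0)
    return emb_a + lam * (emb_b - emb_a)

# Toy 4-dim vectors standing in for 768-dim BERT embeddings.
a = np.array([1.0, 0.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0, 0.0])
synthetic = smote_interpolate(a, b)
```

In the full pipeline, `synthetic` would then be passed to the xRAG decoder to produce a new text example rather than used directly as a feature vector.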
Problem

Research questions and friction points this paper is trying to address.

Addressing data scarcity and class imbalance in NLP model training
Generating synthetic text via SMOTE adaptation with BERT and xRAG
Enabling privacy-preserving ML through synthetic data comparable to original
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapts SMOTE to text via BERT embeddings
Uses xRAG to decode vectors into text
Enables privacy-preserving synthetic data generation
Mateusz Bystroński
Wrocław University of Science and Technology
Mikolaj Holysz
Wrocław University of Science and Technology
Grzegorz Piotrowski
Wrocław University of Science and Technology
Nitesh V. Chawla
University of Notre Dame
Tomasz Kajdanowicz
Wroclaw University of Technology
Data Science · Machine Learning · Representation Learning