🤖 AI Summary
LLM-based oversampling methods tend to generate minority-class samples with limited diversity, which undermines the robustness and generalization of downstream classifiers. To address this, the paper proposes a label-feature joint conditional generation framework that integrates conditional sampling, interpolation-augmented fine-tuning, and permutation-based fine-tuning, and uses information entropy to quantitatively measure and optimize generative diversity. Systematic evaluations on 10 standard tabular datasets show that the method significantly outperforms eight state-of-the-art baselines: it improves generative diversity by 23.6% on average, boosts downstream classification accuracy by 4.1% on average, and increases F1-score by 5.3%. The framework effectively mitigates class imbalance while preserving both sample fidelity and discriminative utility.
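The summary measures generative diversity with information entropy. As a rough illustration of what such a metric can capture, the sketch below computes the Shannon entropy of a categorical column's value distribution; a generator that repeats a few modes scores lower than one that spreads probability mass evenly. This is a simple per-column proxy of our own, not the paper's exact entropy formulation.

```python
from collections import Counter
import math

def shannon_entropy(values):
    """Shannon entropy (in bits) of a categorical value distribution.

    Illustrative proxy for generative diversity: higher entropy means
    the generated values are spread more evenly across categories.
    """
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A low-diversity generator repeats a few modes...
low_div = ["A", "A", "A", "A", "B", "A", "A", "B"]
# ...while a diverse one covers the categories uniformly.
high_div = ["A", "B", "C", "D", "A", "B", "C", "D"]

print(shannon_entropy(low_div))   # lower
print(shannon_entropy(high_div))  # higher (uniform over 4 categories)
```

In practice one would aggregate such per-column entropies (or use a joint estimate) over the synthetic minority samples to compare generators.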
📝 Abstract
Oversampling is one of the most widely used approaches for addressing imbalanced classification. The core idea is to generate additional minority samples to rebalance the dataset. Most existing methods, such as SMOTE, require converting categorical variables into numerical vectors, which often leads to information loss. Recently, large language model (LLM)-based methods have been introduced to overcome this limitation. However, current LLM-based approaches typically generate minority samples with limited diversity, reducing robustness and generalizability in downstream classification tasks. To address this gap, we propose a novel LLM-based oversampling method designed to enhance diversity. First, we introduce a sampling strategy that conditions synthetic sample generation on both minority labels and features. Second, we develop a new permutation strategy for fine-tuning pre-trained LLMs. Third, we fine-tune the LLM not only on minority samples but also on interpolated samples to further enrich variability. Extensive experiments on 10 tabular datasets demonstrate that our method significantly outperforms eight state-of-the-art baselines. The generated synthetic samples are both realistic and diverse. Moreover, we provide a theoretical analysis from an entropy-based perspective, proving that our method encourages diversity in the generated samples.
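The abstract's third component fine-tunes the LLM on interpolated minority samples. As a hedged sketch of what interpolation between minority samples can look like on numeric features, the snippet below forms a SMOTE-style convex combination of two samples; the paper's actual interpolation scheme and its handling of categorical columns are not specified here, so treat this as an assumption-laden illustration.

```python
import random

def interpolate_minority(x1, x2, alpha=None):
    """SMOTE-style linear interpolation between the numeric feature
    vectors of two minority samples.

    alpha in [0, 1) places the synthetic point on the segment between
    x1 and x2; if omitted, it is drawn uniformly at random.
    """
    if alpha is None:
        alpha = random.random()
    return [a + alpha * (b - a) for a, b in zip(x1, x2)]

# Hypothetical minority samples with two numeric features each.
minority = [[1.0, 4.0], [3.0, 8.0], [2.0, 6.0]]
x1, x2 = random.sample(minority, 2)
synthetic = interpolate_minority(x1, x2)
print(synthetic)  # lies on the segment between x1 and x2
```

Such interpolated rows would then be serialized and included in the fine-tuning data alongside the real minority samples, enriching the variability the model sees.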