Large Language Models for Imbalanced Classification: Diversity makes the difference

📅 2025-10-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the limited diversity of minority-class samples generated by LLM-based oversampling methods—which undermines downstream classification robustness and generalization—this paper proposes a label-feature joint conditional generation framework. Our approach innovatively integrates conditional sampling, interpolation-augmented fine-tuning, and permutation-based fine-tuning, while leveraging information entropy theory to quantitatively measure and optimize generative diversity. We conduct systematic evaluations across 10 standard tabular datasets. Results demonstrate that our method significantly outperforms eight state-of-the-art baselines: it improves generative diversity by 23.6% (average), boosts downstream classification accuracy by 4.1% (average), and increases F1-score by 5.3%. The framework effectively mitigates class imbalance while preserving both sample fidelity and discriminative utility.

Technology Category

Application Category

📝 Abstract
Oversampling is one of the most widely used approaches for addressing imbalanced classification. The core idea is to generate additional minority samples to rebalance the dataset. Most existing methods, such as SMOTE, require converting categorical variables into numerical vectors, which often leads to information loss. Recently, large language model (LLM)-based methods have been introduced to overcome this limitation. However, current LLM-based approaches typically generate minority samples with limited diversity, reducing robustness and generalizability in downstream classification tasks. To address this gap, we propose a novel LLM-based oversampling method designed to enhance diversity. First, we introduce a sampling strategy that conditions synthetic sample generation on both minority labels and features. Second, we develop a new permutation strategy for fine-tuning pre-trained LLMs. Third, we fine-tune the LLM not only on minority samples but also on interpolated samples to further enrich variability. Extensive experiments on 10 tabular datasets demonstrate that our method significantly outperforms eight SOTA baselines. The generated synthetic samples are both realistic and diverse. Moreover, we provide theoretical analysis through an entropy-based perspective, proving that our method encourages diversity in the generated samples.
Problem

Research questions and friction points this paper is trying to address.

Generating diverse synthetic samples for imbalanced classification
Overcoming limited diversity in LLM-based oversampling methods
Enhancing robustness in classification with realistic synthetic data
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM oversampling enhances diversity via conditional generation
Fine-tuning uses permutation and interpolated sample strategies
Method increases entropy for more realistic synthetic data
🔎 Similar Papers
No similar papers found.
D
Dang Nguyen
Applied Artificial Intelligence Initiative (A2I2), Deakin University, Geelong, Australia
S
Sunil Gupta
Applied Artificial Intelligence Initiative (A2I2), Deakin University, Geelong, Australia
Kien Do
Kien Do
Applied Artificial Intelligence Institute (A2I2), Deakin University
Deep LearningRepresentation LearningGenerative Models
Thin Nguyen
Thin Nguyen
Senior Research Lecturer, Deakin University, Australia
causal AIdata science
T
Taylor Braund
Black Dog Institute, University of New South Wales, Sydney, Australia
A
Alexis Whitton
Black Dog Institute, University of New South Wales, Sydney, Australia
Svetha Venkatesh
Svetha Venkatesh
Deakin Distinguished Professor, Deakin University
Bayesian OptimizationAdaptive TrialsPattern RecognitionMultimediaMachine learning