Semantics-Aware Generative Latent Data Augmentation for Learning in Low-Resource Domains

📅 2026-02-02
🤖 AI Summary
This work addresses the performance limitations of deep learning models in low-resource and long-tailed scenarios where labeled data are scarce. To this end, the authors propose GeLDA, a framework that, for the first time, introduces a semantics-aware conditional diffusion generation mechanism into the low-dimensional semantic latent space induced by foundation models such as Whisper. By using auxiliary feature vectors to model semantic relationships across classes or subdomains, GeLDA enables high-quality data augmentation. The method substantially improves model generalization under both low-resource and zero-shot settings: on the zero-shot, text-independent speech emotion recognition task, it boosts the unweighted average recall of the Whisper-large baseline by 6.13%; on the ImageNet-LT long-tailed classification benchmark, it achieves a state-of-the-art accuracy of 74.7% on tail classes.

📝 Abstract
Despite strong performance in data-rich regimes, deep learning often underperforms in the data-scarce settings common in practice. While foundation models (FMs) trained on massive datasets demonstrate strong generalization by extracting general-purpose features, they can still suffer from scarce labeled data during downstream fine-tuning. To address this, we propose GeLDA, a semantics-aware generative latent data augmentation framework that leverages conditional diffusion models to synthesize samples in an FM-induced latent space. Because this space is low-dimensional and concentrates task-relevant information compared to the input space, GeLDA enables efficient, high-quality data generation. GeLDA conditions generation on auxiliary feature vectors that capture semantic relationships among classes or subdomains, facilitating data augmentation in low-resource domains. We validate GeLDA on two large-scale recognition tasks: (a) in zero-shot language-specific speech emotion recognition, GeLDA improves the Whisper-large baseline's unweighted average recall by 6.13%; and (b) in long-tailed image classification, it achieves 74.7% tail-class accuracy on ImageNet-LT, setting a new state of the art.
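The abstract's pipeline (encode to an FM latent space, then synthesize new latents with a diffusion model conditioned on a semantic vector) can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the noise schedule, dimensions, and especially the `toy_denoiser` (a closed-form stand-in for a learned noise predictor ε_θ(z_t, t, cond)) are assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 50                                  # number of diffusion steps (illustrative)
betas = np.linspace(1e-4, 0.05, T)      # linear noise schedule (assumption)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def toy_denoiser(z_t, t, cond):
    """Stand-in for a learned conditional noise predictor eps_theta(z_t, t, cond).
    It treats the auxiliary semantic vector `cond` as the clean latent z_0,
    so the implied noise is the closed-form residual. Purely illustrative:
    in GeLDA this would be a trained network conditioned on semantics."""
    return (z_t - np.sqrt(alpha_bars[t]) * cond) / np.sqrt(1.0 - alpha_bars[t])

def sample_latent(cond, dim=8):
    """DDPM-style ancestral sampling in the latent space, conditioned on the
    auxiliary semantic vector. Returns one synthetic latent sample."""
    z = rng.standard_normal(dim)        # start from pure noise
    for t in reversed(range(T)):
        eps = toy_denoiser(z, t, cond)
        # standard DDPM posterior mean given the predicted noise
        mean = (z - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(dim) if t > 0 else 0.0
        z = mean + np.sqrt(betas[t]) * noise
    return z

# Hypothetical semantic condition for a scarce (tail) class, e.g. the mean
# FM latent of its few labeled examples.
cond = np.ones(8)
synthetic = sample_latent(cond)         # synthetic latent for augmentation
```

The synthetic latents would then be paired with the tail-class label and mixed into fine-tuning data; because they live in the low-dimensional FM latent space rather than the raw input space, sampling is cheap relative to pixel- or waveform-level generation.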
Problem

Research questions and friction points this paper is trying to address.

low-resource learning
data scarcity
few-shot learning
data augmentation
downstream fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

latent data augmentation
conditional diffusion models
foundation models
low-resource learning
semantic-aware generation