Conditional-$t^3$VAE: Equitable Latent Space Allocation for Fair Generation

📅 2025-09-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address unfair latent space allocation and poor generation quality for tail classes in class-imbalanced data, this paper proposes Conditional-t³VAE. First, it introduces a class-conditional Student’s t joint prior—marking the first such use in VAEs—to explicitly model heavy-tailed structure and balance class-wise latent distributions. Second, it derives a closed-form, differentiable objective based on γ-divergence, jointly optimizing for generation fairness and robustness against outliers. Third, it employs an equal-weight latent variable mixing strategy to ensure balanced sampling across classes. Evaluated on long-tailed benchmarks—including SVHN-LT, CIFAR-100-LT, and CelebA—Conditional-t³VAE achieves significantly lower FID scores and higher F1 scores compared to both t³VAE and Gaussian VAE baselines. Notably, it delivers substantial gains in generation fairness under severe class imbalance, demonstrating superior calibration of tail-class representation and synthesis.

Technology Category

Application Category

📝 Abstract
Variational Autoencoders (VAEs) with global priors mirror the training set's class frequency in latent space, underrepresenting tail classes and reducing generative fairness on imbalanced datasets. While $t^3$VAE improves robustness via heavy-tailed Student's t-distribution priors, it still allocates latent volume proportionally to the class frequency.In this work, we address this issue by explicitly enforcing equitable latent space allocation across classes. To this end, we propose Conditional-$t^3$VAE, which defines a per-class mbox{Student's t} joint prior over latent and output variables, preventing dominance by majority classes. Our model is optimized using a closed-form objective derived from the $γ$-power divergence. Moreover, for class-balanced generation, we derive an equal-weight latent mixture of Student's t-distributions. On SVHN-LT, CIFAR100-LT, and CelebA, Conditional-$t^3$VAE consistently achieves lower FID scores than both $t^3$VAE and Gaussian-based VAE baselines, particularly under severe class imbalance. In per-class F1 evaluations, Conditional-$t^3$VAE also outperforms the conditional Gaussian VAE across all highly imbalanced settings. While Gaussian-based models remain competitive under mild imbalance ratio ($ρlesssim 3$), our approach substantially improves generative fairness and diversity in more extreme regimes.
Problem

Research questions and friction points this paper is trying to address.

Addresses inequitable latent space allocation in VAEs
Improves generative fairness for minority classes
Enhances diversity under severe class imbalance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Per-class Student's t joint prior allocation
Closed-form optimization using γ-power divergence
Equal-weight latent mixture for balanced generation
A
Aymene Mohammed Bouayed
DIÉNS, ÉNS, CNRS, PSL University, Paris, France
Samuel Deslauriers-Gauthier
Samuel Deslauriers-Gauthier
Inria Centre at Université Côte d'Azur
A
Adrian Iaccovelli
Be-Ys Research, France
D
David Naccache
DIÉNS, ÉNS, CNRS, PSL University, Paris, France