CompDiff: Hierarchical Compositional Diffusion for Fair and Zero-Shot Intersectional Medical Image Generation

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses algorithmic bias in medical image generation caused by imbalanced training data, which hinders fair synthesis of images for rare or unseen demographic subgroups. To mitigate this, the authors propose CompDiff, a hierarchical compositional diffusion framework that decouples demographic conditioning information at the representation level, enabling parameter sharing and compositional generalization across subgroups. Central to CompDiff is a Hierarchical Conditioner Network (HCN) that fuses structured demographic attributes with CLIP text embeddings via cross-attention, facilitating zero-shot generation for underrepresented intersectional groups. Evaluated on MIMIC-CXR and FairGenMed, CompDiff substantially outperforms baseline methods, reducing overall FID to 64.3 (vs. 75.1), improving zero-shot cross-group FID by up to 21%, and simultaneously improving downstream classification AUROC while reducing demographic bias.

📝 Abstract
Generative models are increasingly used to augment medical imaging datasets for fairer AI. Yet a key assumption often goes unexamined: that generators themselves produce equally high-quality images across demographic groups. Models trained on imbalanced data can inherit these imbalances, yielding degraded synthesis quality for rare subgroups and struggling with demographic intersections absent from training. We refer to this as the imbalanced generator problem. Existing remedies such as loss reweighting operate at the optimization level and provide limited benefit when training signal is scarce or absent for certain combinations. We propose CompDiff, a hierarchical compositional diffusion framework that addresses this problem at the representation level. A dedicated Hierarchical Conditioner Network (HCN) decomposes demographic conditioning, producing a demographic token concatenated with CLIP embeddings as cross-attention context. This structured factorization encourages parameter sharing across subgroups and supports compositional generalization to rare or unseen demographic intersections. Experiments on chest X-rays (MIMIC-CXR) and fundus images (FairGenMed) show that CompDiff compares favorably against both standard fine-tuning and FairDiffusion across image quality (FID: 64.3 vs. 75.1), subgroup equity (ES-FID), and zero-shot intersectional generalization (up to 21% FID improvement on held-out intersections). Downstream classifiers trained on CompDiff-generated data also show improved AUROC and reduced demographic bias, suggesting that architectural design of demographic conditioning is an important and underexplored factor in fair medical image generation. Code is available at https://anonymous.4open.science/r/CompDiff-6FE6.
Problem

Research questions and friction points this paper is trying to address.

imbalanced generator problem
fair medical image generation
zero-shot intersectional generalization
demographic bias
compositional generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

compositional diffusion
hierarchical conditioning
zero-shot generalization
fair medical image generation
demographic factorization