Graph Conditioned Diffusion for Controllable Histopathology Image Generation

📅 2025-10-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing diffusion models for medical image generation operate in unstructured noise spaces, hindering fine-grained, semantically consistent control over anatomical structures in pathological images. Method: We propose a graph-conditioned diffusion generative framework that, for the first time, encodes spatial relationships and morphological characteristics of tissue structures as graph-structured data—serving as a semantic prior injected into the diffusion process. By integrating graph neural networks with Transformer modules, our method enables object-level, fine-grained manipulation within the noisy latent space. Contribution/Results: The framework significantly improves anatomical plausibility and interpretability of generated images. Evaluated on real pathological image synthesis, the generated data effectively substitutes real annotated samples for downstream segmentation tasks, achieving performance on par with models trained on ground-truth data. This establishes a novel paradigm for few-shot medical image generation.

Technology Category

Application Category

📝 Abstract
Recent advances in Diffusion Probabilistic Models (DPMs) have set new standards in high-quality image synthesis. Yet, controlled generation remains challenging, particularly in sensitive areas such as medical imaging. Medical images feature inherent structure such as consistent spatial arrangement, shape or texture, all of which are critical for diagnosis. However, existing DPMs operate in noisy latent spaces that lack semantic structure and strong priors, making it difficult to ensure meaningful control over generated content. To address this, we propose graph-based object-level representations for Graph-Conditioned-Diffusion. Our approach generates graph nodes corresponding to each major structure in the image, encapsulating their individual features and relationships. These graph representations are processed by a transformer module and integrated into a diffusion model via the text-conditioning mechanism, enabling fine-grained control over generation. We evaluate this approach using a real-world histopathology use case, demonstrating that our generated data can reliably substitute for annotated patient data in downstream segmentation tasks. The code is available here.
Problem

Research questions and friction points this paper is trying to address.

Generating histopathology images with controlled structural features
Addressing semantic structure limitations in diffusion models
Enabling fine-grained control over medical image generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph-based object-level representations for histopathology generation
Graph nodes encode structure features and relationships
Transformer-processed graphs integrated via text-conditioning mechanism
🔎 Similar Papers
No similar papers found.
S
Sarah Cechnicka
Department of Computing, Imperial College London, UK
Matthew Baugh
Matthew Baugh
Imperial College London
machine learninganomaly detectionmedical imagingoutlier detection
W
Weitong Zhang
Department of Computing, Imperial College London, UK
Mischa Dombrowski
Mischa Dombrowski
Friedrich-Alexander-Universität Erlangen-Nürnberg
Diffusion ModelsPrivacyUnconditional Image GenerationMedical Applications
Z
Zhe Li
Dept. AIBE, Friedrich–Alexander University Erlangen–Nürnberg, DE
Johannes C. Paetzold
Johannes C. Paetzold
Cornell University, Weill Cornell Medicine
Machine LearningGeometric Deep LearningGenerative ModelsBiomedical Image Analysis
C
Candice Roufosse
Department of Computing, Imperial College London, UK; Centre for Inflammatory Disease, Imperial College London, London, UK
Bernhard Kainz
Bernhard Kainz
FAU Erlangen-Nürnberg, Imperial College London
human-in-the-loop computingmachine learningmedical image analysis