RL4Med-DDPO: Reinforcement Learning for Controlled Guidance Towards Diverse Medical Image Generation using Vision-Language Foundation Models

📅 2025-03-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address insufficient fine-grained text-image alignment in medical image generation—limiting clinical diagnostic utility—this paper proposes a multi-stage controllable reinforcement learning (RL) framework. Leveraging a vision-language foundation model (VLFM) for semantic priors and Stable Diffusion as the generative backbone, we design a clinically oriented semantic alignment reward function and iteratively optimize region-level text-image matching via the DDPO policy gradient algorithm. This work is the first to introduce controllable RL into medical diffusion-based generation, enabling diverse clinical prompt conditioning and subgroup-aware data augmentation. Evaluated on a dermatological imaging dataset, our method achieves a 21.3% reduction in FID and an 18.6% improvement in CLIP-Score over baselines; clinical expert assessments confirm statistically significant superiority over fine-tuning approaches. Furthermore, downstream rare-disease classifier accuracy improves by 12.7%, demonstrating enhanced diagnostic relevance and generalizability.

Technology Category

Application Category

📝 Abstract
Vision-Language Foundation Models (VLFM) have shown a tremendous increase in performance in terms of generating high-resolution, photorealistic natural images. While VLFMs show a rich understanding of semantic content across modalities, they often struggle with fine-grained alignment tasks that require precise correspondence between image regions and textual descriptions a limitation in medical imaging, where accurate localization and detection of clinical features are essential for diagnosis and analysis. To address this issue, we propose a multi-stage architecture where a pre-trained VLFM provides a cursory semantic understanding, while a reinforcement learning (RL) algorithm refines the alignment through an iterative process that optimizes for understanding semantic context. The reward signal is designed to align the semantic information of the text with synthesized images. We demonstrate the effectiveness of our method on a medical imaging skin dataset where the generated images exhibit improved generation quality and alignment with prompt over the fine-tuned Stable Diffusion. We also show that the synthesized samples could be used to improve disease classifier performance for underrepresented subgroups through augmentation.
Problem

Research questions and friction points this paper is trying to address.

Improves fine-grained alignment in medical image generation.
Enhances semantic context understanding using reinforcement learning.
Augments underrepresented subgroups to improve disease classifier performance.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning refines image-text alignment.
Multi-stage architecture enhances medical image generation.
Improved disease classifier via synthetic image augmentation.
🔎 Similar Papers
No similar papers found.
Parham Saremi
Parham Saremi
ECE student at McGill
Machine LearningMedical ImagingComputer VisionGenerative Modeling
A
Amar Kumar
Center for Intelligent Machines, McGill University; MILA (Quebec AI institute)
M
Mohammed Mohammed
Center for Intelligent Machines, McGill University; MILA (Quebec AI institute)
Z
Zahra Tehraninasab
Center for Intelligent Machines, McGill University; MILA (Quebec AI institute)
Tal Arbel
Tal Arbel
Professor of Electrical & Computer Engineering, McGill University
Computer VisionMedical Imaging