🤖 AI Summary
To address insufficient fine-grained text-image alignment in medical image generation—limiting clinical diagnostic utility—this paper proposes a multi-stage controllable reinforcement learning (RL) framework. Leveraging a vision-language foundation model (VLFM) for semantic priors and Stable Diffusion as the generative backbone, we design a clinically oriented semantic alignment reward function and iteratively optimize region-level text-image matching via the DDPO policy gradient algorithm. This work is the first to introduce controllable RL into medical diffusion-based generation, enabling diverse clinical prompt conditioning and subgroup-aware data augmentation. Evaluated on a dermatological imaging dataset, our method achieves a 21.3% reduction in FID and an 18.6% improvement in CLIP-Score over baselines; clinical expert assessments confirm statistically significant superiority over fine-tuning approaches. Furthermore, downstream rare-disease classifier accuracy improves by 12.7%, demonstrating enhanced diagnostic relevance and generalizability.
📝 Abstract
Vision-Language Foundation Models (VLFM) have shown a tremendous increase in performance in terms of generating high-resolution, photorealistic natural images. While VLFMs show a rich understanding of semantic content across modalities, they often struggle with fine-grained alignment tasks that require precise correspondence between image regions and textual descriptions a limitation in medical imaging, where accurate localization and detection of clinical features are essential for diagnosis and analysis. To address this issue, we propose a multi-stage architecture where a pre-trained VLFM provides a cursory semantic understanding, while a reinforcement learning (RL) algorithm refines the alignment through an iterative process that optimizes for understanding semantic context. The reward signal is designed to align the semantic information of the text with synthesized images. We demonstrate the effectiveness of our method on a medical imaging skin dataset where the generated images exhibit improved generation quality and alignment with prompt over the fine-tuned Stable Diffusion. We also show that the synthesized samples could be used to improve disease classifier performance for underrepresented subgroups through augmentation.