🤖 AI Summary
This work addresses the misalignment between the training objectives of 3D diffusion models and the clinical requirements of medical image generation by proposing a reinforcement learning–based multi-scale reward fine-tuning framework. Starting from a 3D diffusion model pre-trained on MRI volumes, the method employs Proximal Policy Optimization (PPO) to fine-tune the generator, integrating multi-scale feedback signals from both 2D slices and full 3D volumes to jointly optimize local texture realism and global anatomical consistency. Experiments on the BraTS 2019 and OASIS-1 datasets demonstrate that the proposed approach significantly reduces the Fréchet Inception Distance (FID) and produces clinically plausible synthetic images, thereby enhancing performance on downstream tasks such as tumor and disease classification.
📝 Abstract
Diffusion models have emerged as powerful tools for 3D medical image generation, yet bridging the gap between standard training objectives and clinical relevance remains a challenge. This paper presents a method to enhance 3D diffusion models using Reinforcement Learning (RL) with multi-scale feedback. We first pretrain a 3D diffusion model on MRI volumes to establish a robust generative prior. Subsequently, we fine-tune the model using Proximal Policy Optimization (PPO), guided by a novel reward system that integrates both 2D slice-wise assessments and 3D volumetric analysis. This combination allows the model to simultaneously optimize for local texture details and global structural coherence. We validate our framework on the BraTS 2019 and OASIS-1 datasets. Our results indicate that incorporating RL feedback effectively steers the generation process toward higher quality distributions. Quantitative analysis reveals significant improvements in Fréchet Inception Distance (FID) and, crucially, the synthetic data demonstrates enhanced utility in downstream tumor and disease classification tasks compared to non-optimized baselines.
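
The multi-scale reward described above can be illustrated with a minimal sketch. The paper does not publish its reward implementation; the function below is a hypothetical stand-in that combines per-slice 2D scores (local texture) with a single 3D volumetric score (global anatomy) into one scalar, as a PPO fine-tuning loop would consume. The scorer callables and the weighting scheme are assumptions for illustration, not the authors' actual reward models.

```python
import numpy as np

def multi_scale_reward(volume, slice_scorer, volume_scorer, alpha=0.5):
    """Hypothetical multi-scale reward: weighted mix of 2D and 3D scores.

    volume:        np.ndarray of shape (D, H, W), a generated MRI volume.
    slice_scorer:  callable mapping a 2D slice (H, W) to a scalar in [0, 1]
                   (stands in for a slice-wise realism critic).
    volume_scorer: callable mapping the full volume to a scalar in [0, 1]
                   (stands in for a volumetric anatomical-consistency critic).
    alpha:         assumed weight on the volumetric term.
    """
    # Local texture realism: average critic score over axial slices.
    slice_scores = [slice_scorer(volume[d]) for d in range(volume.shape[0])]
    r_2d = float(np.mean(slice_scores))
    # Global anatomical consistency: one score for the whole volume.
    r_3d = float(volume_scorer(volume))
    # The combined scalar would feed PPO's advantage estimation.
    return alpha * r_3d + (1.0 - alpha) * r_2d

# Toy usage with stand-in scorers (real critics would be learned networks).
vol = np.random.rand(8, 16, 16)
reward = multi_scale_reward(
    vol,
    slice_scorer=lambda s: float(s.mean()),
    volume_scorer=lambda v: 1.0 - float(v.std()),
)
```

A weighted sum is the simplest way to trade off the two scales; in practice the balance between slice-level and volume-level feedback would be a tunable hyperparameter of the fine-tuning run.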