🤖 AI Summary
Medical imaging AI faces the dual challenges of data scarcity and privacy constraints. Existing generative models struggle to synthesize high-dimensional 3D medical images efficiently in resource-limited clinical settings, and their evaluation relies heavily on quantitative metrics, lacking clinically interpretable validation. This paper proposes MedLoRD, a lightweight diffusion model built on an enhanced conditional diffusion framework. It integrates an efficient 3D U-Net backbone, memory-optimized sampling, and multimodal anatomical prior guidance, enabling end-to-end generation of high-resolution CT volumes (512×512×256 voxels) with only 24 GB of VRAM. To our knowledge, this is the first work to achieve high-fidelity 3D volumetric synthesis under such stringent hardware constraints. We further introduce clinician-based assessment, regional volume analysis, and conditional mask consistency as clinically interpretable evaluation criteria, evaluated jointly with downstream segmentation tasks. On coronary CTA and lung CT benchmarks, MedLoRD surpasses state-of-the-art methods in image fidelity, anatomical plausibility, and downstream task performance, reaching clinically deployable quality.
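One of the evaluation criteria above, conditional mask consistency, is commonly quantified as the overlap between the conditioning segmentation mask and a segmentation of the generated volume. The paper does not specify the exact metric, so the following is a minimal sketch assuming the standard Dice coefficient on binary NumPy masks (function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice coefficient between two binary 3D masks (1.0 = perfect overlap)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy example: two partially overlapping 4x4x4 cubes in an 8x8x8 volume.
cond_mask = np.zeros((8, 8, 8), dtype=bool)
cond_mask[2:6, 2:6, 2:6] = True
gen_mask = np.zeros((8, 8, 8), dtype=bool)
gen_mask[3:7, 3:7, 3:7] = True

print(dice(cond_mask, gen_mask))  # 2*27 / (64+64) = 0.421875
```

In practice the second mask would come from running a trained segmentation model on the synthesized volume, so this score also reflects segmenter error, not only generator adherence.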
📝 Abstract
Advancements in AI for medical imaging offer significant potential, but their application is constrained by the limited availability of data and the reluctance of medical centers to share it due to patient privacy concerns. Generative models present a promising solution by creating synthetic data as a substitute for real patient data. However, medical images are typically high-dimensional, and current state-of-the-art methods are often impractical in computational resource-constrained healthcare environments. These models rely on data sub-sampling, raising doubts about their feasibility and real-world applicability. Furthermore, many of them are evaluated with quantitative metrics that, taken alone, can be misleading about the quality and clinical meaningfulness of the generated images. To address this, we introduce MedLoRD, a generative diffusion model designed for computational resource-constrained environments. MedLoRD can generate high-dimensional medical volumes at resolutions up to 512$\times$512$\times$256 using GPUs with only 24 GB VRAM, which are commonly found in standard desktop workstations. MedLoRD is evaluated across multiple modalities, including Coronary Computed Tomography Angiography and Lung Computed Tomography datasets. Extensive evaluation through radiological assessment, relative regional volume analysis, adherence to conditional masks, and downstream tasks shows that MedLoRD generates high-fidelity images closely adhering to segmentation mask conditions, surpassing the capabilities of current state-of-the-art generative models for medical image synthesis in computational resource-constrained environments.
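To see why the 24 GB VRAM figure is notable, a quick arithmetic check is helpful: a single 512×512×256 volume is small on its own, but U-Net activations multiply that footprint many times over during training and sampling. The numbers below are our own back-of-the-envelope estimate (fp16 storage assumed), not figures from the paper:

```python
# Rough memory arithmetic for one 512x512x256 single-channel volume.
voxels = 512 * 512 * 256            # 67,108,864 voxels
bytes_fp16 = voxels * 2             # 2 bytes per voxel at fp16
gib_per_volume = bytes_fp16 / 2**30 # size in GiB

print(f"{voxels:,} voxels -> {gib_per_volume:.3f} GiB per fp16 volume")
# 67,108,864 voxels -> 0.125 GiB per fp16 volume
```

So the raw volume is only ~0.125 GiB; the challenge is that multi-channel feature maps at several U-Net resolutions, plus gradients and optimizer state, can inflate this by two orders of magnitude, which is what memory-efficient designs like MedLoRD must keep under the 24 GB budget.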