See More, Change Less: Anatomy-Aware Diffusion for Contrast Enhancement

📅 2025-12-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing medical image contrast enhancement methods often distort anatomical structures, introduce artifacts, or miss small lesions because they neither explicitly model anatomical structure nor characterize contrast dynamics. To address this, we propose a structure-aware diffusion model that jointly incorporates anatomical priors and multi-phase relative contrast dynamics, enabling selective enhancement of clinically critical regions while preserving anatomical fidelity, all within a registration-free end-to-end training paradigm. Key innovations include a structure-aware supervised loss, a registration-free multi-phase learning framework, and a unified inference scheme. Evaluated on six external CT datasets, our method significantly outperforms state-of-the-art approaches: SSIM increases by 14.2%, PSNR by 20.6%, and FID decreases by 50%; notably, the cancer detection F1-score improves by 10% on non-contrast-enhanced CT scans. The method thus achieves superior visual quality and diagnostic utility simultaneously.
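The SSIM, PSNR, and FID figures above are standard image-quality metrics. As a minimal illustration of what a PSNR gain measures (this is not the paper's evaluation code, and the function name and signature are our own), PSNR between two images can be computed as:

```python
import math

def psnr(img_a, img_b, max_val=1.0):
    """Peak signal-to-noise ratio between two equal-sized flat images.

    Illustrative helper only; the paper's actual evaluation pipeline is not shown.
    Higher PSNR means the enhanced image is closer to the reference.
    """
    if len(img_a) != len(img_b) or not img_a:
        raise ValueError("images must be non-empty and equal-sized")
    # Mean squared error over all pixels.
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)
```

For example, `psnr([1.0, 0.0], [0.0, 0.0])` gives `10 * log10(2)`, roughly 3.01 dB.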

📝 Abstract
Image enhancement improves visual quality and helps reveal details that are hard to see in the original image. In medical imaging, it can support clinical decision-making, but current models often over-edit. This can distort organs, create false findings, and miss small tumors because these models do not understand anatomy or contrast dynamics. We propose SMILE, an anatomy-aware diffusion model that learns how organs are shaped and how they take up contrast. It enhances only clinically relevant regions while leaving all other areas unchanged. SMILE introduces three key ideas: (1) structure-aware supervision that follows true organ boundaries and contrast patterns; (2) registration-free learning that works directly with unaligned multi-phase CT scans; (3) unified inference that provides fast and consistent enhancement across all contrast phases. Across six external datasets, SMILE outperforms existing methods in image quality (14.2% higher SSIM, 20.6% higher PSNR, 50% better FID) and in clinical usefulness by producing anatomically accurate and diagnostically meaningful images. SMILE also improves cancer detection from non-contrast CT, raising the F1 score by up to 10 percent.
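The first key idea, structure-aware supervision that follows organ boundaries, amounts to penalizing reconstruction error more heavily inside anatomically critical regions. A toy sketch of that weighting is below; the function name, inputs, and weighting scheme are our own illustrative assumptions, not SMILE's actual loss:

```python
def structure_weighted_mse(pred, target, mask, region_weight=5.0):
    """Toy region-weighted MSE: errors inside the anatomical mask count more.

    pred, target: flat lists of predicted / reference intensities.
    mask: flat list of 0/1 flags marking clinically critical voxels.
    region_weight: hypothetical up-weighting factor (not from the paper).
    """
    if not (len(pred) == len(target) == len(mask)) or not pred:
        raise ValueError("inputs must be non-empty and equal-sized")
    num = den = 0.0
    for p, t, m in zip(pred, target, mask):
        w = region_weight if m else 1.0  # up-weight masked voxels
        num += w * (p - t) ** 2
        den += w
    return num / den  # weighted mean squared error
```

With an all-zero mask this reduces to plain MSE, so the weighting only changes behavior where the anatomical mask is active.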
Problem

Research questions and friction points this paper is trying to address.

How to enhance medical images without distorting anatomical structures
How to improve contrast only in clinically relevant regions
How to boost cancer detection accuracy from non-contrast CT scans
Innovation

Methods, ideas, or system contributions that make the work stand out.

Anatomy-aware diffusion model for medical contrast enhancement
Registration-free learning from unaligned multi-phase CT scans
Unified inference for fast, consistent enhancement across phases
Junqi Liu
Johns Hopkins University
Zejun Wu
Johns Hopkins University
Pedro R. A. S. Bassi
Johns Hopkins University
Xinze Zhou
Johns Hopkins University
Wenxuan Li
Johns Hopkins University
Imaging Informatics, Computer-aided Diagnosis
Ibrahim E. Hamamci
University of Zurich
Sezgin Er
Istanbul Medipol University
Tianyu Lin
Johns Hopkins University
Medical Image Analysis, Computer Vision
Yi Luo
Johns Hopkins University
Szymon Płotka
Jagiellonian University
Machine Learning, Deep Learning, Computer Vision, Medical Imaging
Bjoern Menze
Universität Zürich
Biomedical Image Analysis, Medical Image Analysis, Medical Image Computing, Machine Learning
Daguang Xu
Senior Research Manager at NVIDIA
Deep Learning, Machine Learning, Medical Image Analysis, Compressive Sensing, Sparse Coding
Kai Ding
Johns Hopkins Medicine
Kang Wang
University of California, San Francisco
Yang Yang
University of California, San Francisco
Yucheng Tang
Sr. Research Scientist at NVIDIA
3D Computer Vision, Vision-Language Model, Healthcare AI, Accelerated Computing
Alan L. Yuille
Johns Hopkins University
Zongwei Zhou
Johns Hopkins University