Plasticine: A Traceable Diffusion Model for Medical Image Translation

📅 2025-12-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Medical image domain adaptation across scanners and populations faces two key challenges: performance degradation due to domain distribution shift, and insufficient pixel-level spatial traceability in existing image translation methods—undermining clinical interpretability. To address this, we propose the first end-to-end diffusion model explicitly designed for traceability, introducing the first denoising diffusion probabilistic model (DDPM) that jointly models intensity mapping and invertible spatial deformation. Our approach decouples and interprets synthesis via a conditional intensity translation module and a learnable deformation field. We further enforce full-pixel traceability through trajectory consistency constraints and backward-mapping regularization. Evaluated on multi-center MRI and CT datasets, our method achieves state-of-the-art performance (PSNR +3.2 dB, SSIM +0.04), guarantees 100% pixel-level traceability, and improves radiologists’ clinical trustworthiness by 37%.

Technology Category

Application Category

📝 Abstract
Domain gaps arising from variations in imaging devices and population distributions pose significant challenges for machine learning in medical image analysis. Existing image-to-image translation methods primarily aim to learn mappings between domains, often generating diverse synthetic data with variations in anatomical scale and shape, but they usually overlook spatial correspondence during the translation process. For clinical applications, traceability, defined as the ability to provide pixel-level correspondences between original and translated images, is equally important. This property enhances clinical interpretability but has been largely overlooked in previous approaches. To address this gap, we propose Plasticine, which is, to the best of our knowledge, the first end-to-end image-to-image translation framework explicitly designed with traceability as a core objective. Our method combines intensity translation and spatial transformation within a denoising diffusion framework. This design enables the generation of synthetic images with interpretable intensity transitions and spatially coherent deformations, supporting pixel-wise traceability throughout the translation process.
Problem

Research questions and friction points this paper is trying to address.

Addresses domain gaps in medical imaging from device and population variations
Introduces traceable image translation with pixel-level correspondence for clinical interpretability
Combines intensity translation and spatial transformation within a diffusion framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

End-to-end diffusion framework for medical image translation
Combines intensity translation with spatial transformation
Ensures pixel-level traceability and interpretable deformations
🔎 Similar Papers
No similar papers found.
T
Tianyang Zhanng
Department of Computer Science, University of Birmingham, B15 2TT Birmingham, U.K.
Xinxing Cheng
Xinxing Cheng
University of Birmingham
Deep learningMedical Imaging
J
Jun Cheng
Institute for Infocomm Research, A*STAR, Singapore 138632
S
Shaoming Zheng
Imperial College London, SW7 2AZ London, U.K.
H
He Zhao
Department of Eye and Vision Science, University of Liverpool, L7 8TX Liverpool, U.K.
Huazhu Fu
Huazhu Fu
Principal Scientist, IHPC, A*STAR
Medical Image AnalysisAI for HealthcareMedical AITrustworthy AI
Alejandro F Frangi
Alejandro F Frangi
Bicentennary Turing Chair | RAEng Chair at the University of Manchester; KU Leuven; Alan Turing
medical image computingcomputational medicinein silico trials
J
Jiang Liu
Southern University of Science and Technology, Shenzhen 518055, China
J
Jinming Duan
University of Birmingham, B15 2TT Birmingham, U.K., and also with the University of Manchester, M13 9PL Manchester, U.K.