TraceTrans: Translation and Spatial Tracing for Surgical Prediction

📅 2025-10-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing image-to-image translation methods often neglect spatial correspondence between source and generated images, leading to anatomical distortions and hallucinations that severely compromise reliability and interpretability in clinical postoperative prediction. To address this, we propose a framework that integrates deformable image translation with explicit spatial correspondence modeling. Our approach employs a shared encoder for feature extraction and dual decoders that jointly predict a deformation field and the target image, with the deformation field serving as a geometric constraint to enforce spatial alignment. This design ensures anatomical consistency while preserving target distribution matching. Evaluated on medical aesthetics and brain MRI datasets, our model significantly suppresses structural distortions and achieves high-fidelity, interpretable postoperative image prediction. Quantitative and qualitative results demonstrate superior clinical applicability and robustness compared to state-of-the-art methods.

📝 Abstract
Image-to-image translation models have achieved notable success in converting images across visual domains and are increasingly used for medical tasks such as predicting post-operative outcomes and modeling disease progression. However, most existing methods primarily aim to match the target distribution and often neglect spatial correspondences between the source and translated images. This limitation can lead to structural inconsistencies and hallucinations, undermining the reliability and interpretability of the predictions. These challenges are accentuated in clinical applications by the stringent requirement for anatomical accuracy. In this work, we present TraceTrans, a novel deformable image translation model designed for post-operative prediction that generates images aligned with the target distribution while explicitly revealing spatial correspondences with the pre-operative input. The framework employs an encoder for feature extraction and dual decoders for predicting spatial deformations and synthesizing the translated image. The predicted deformation field imposes spatial constraints on the generated output, ensuring anatomical consistency with the source. Extensive experiments on medical cosmetology and brain MRI datasets demonstrate that TraceTrans delivers accurate and interpretable post-operative predictions, highlighting its potential for reliable clinical deployment.
Problem

Research questions and friction points this paper is trying to address.

Predicting post-operative outcomes with anatomical accuracy
Ensuring spatial correspondence between pre- and post-operative images
Reducing structural inconsistencies in medical image translation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deformable image translation model for post-operative prediction
Dual decoders predict spatial deformations and synthesize images
Predicted deformation field ensures anatomical consistency with source
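The paper is only summarized at a high level here, but the core mechanism it describes is applying a predicted dense deformation field to the source image so the translated output stays spatially aligned with it. As a hedged illustration of that idea (not the authors' implementation, which predicts the field with a learned decoder), the following NumPy sketch warps a 2D image by a per-pixel displacement field using bilinear interpolation, clamping out-of-bounds samples to the image edge. The function name and field layout `(H, W, 2)` are assumptions for this example.

```python
import numpy as np

def warp_image(image, deformation):
    """Warp a 2D image by a dense displacement field of shape (H, W, 2),
    where deformation[y, x] = (dy, dx) says where output pixel (y, x)
    samples from in the source. Bilinear interpolation, edge clamping."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Sampling coordinates: identity grid plus predicted displacement.
    sy = ys + deformation[..., 0]
    sx = xs + deformation[..., 1]
    # Integer corners of each sampling location, clamped to the image.
    y0 = np.clip(np.floor(sy).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(sx).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    # Fractional offsets used as bilinear weights.
    wy = np.clip(sy, 0, h - 1) - y0
    wx = np.clip(sx, 0, w - 1) - x0
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

A zero field reproduces the input exactly, which is the property that makes the deformation branch act as an interpretable spatial constraint: any anatomical change in the output must be accounted for by an explicit displacement.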
Authors
Xiyu Luo, Southern University of Science and Technology
Haodong LI, Southern University of Science and Technology
Xinxing Cheng, University of Birmingham
He Zhao, University of Liverpool
Yang Hu, University of Oxford
Xuan Song, Jilin University
Tianyang Zhang, University of Oxford

Topics: Deep learning, Medical Imaging