CD-DPE: Dual-Prompt Expert Network based on Convolutional Dictionary Feature Decoupling for Multi-Contrast MRI Super-Resolution

📅 2025-11-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
In multi-contrast MRI super-resolution, inter-modal contrast discrepancies lead to insufficient utilization of the textural information in reference images. Method: This paper proposes a dual-prompt expert network built on convolutional dictionary-based feature decoupling. Specifically, a convolutional dictionary feature decoupling module (CD-FDM) separates features into cross-contrast and intra-contrast components, and a dual-prompt feature fusion expert module (DP-FFEM) uses a frequency prompt and an adaptive routing prompt for selective feature integration. Results: Extensive experiments on public multi-contrast MRI datasets demonstrate that the method outperforms state-of-the-art approaches in reconstructing fine anatomical details, and experiments on unseen datasets show strong generalization.

📝 Abstract
Multi-contrast magnetic resonance imaging (MRI) super-resolution aims to reconstruct high-resolution (HR) images from low-resolution (LR) scans by leveraging structural information present in HR reference images acquired with different contrasts. This technique enhances anatomical detail and soft tissue differentiation, which is vital for early diagnosis and clinical decision-making. However, inherent contrast disparities between modalities pose fundamental challenges in effectively utilizing reference image textures to guide target image reconstruction, often resulting in suboptimal feature integration. To address this issue, we propose a dual-prompt expert network based on a convolutional dictionary feature decoupling (CD-DPE) strategy for multi-contrast MRI super-resolution. Specifically, we introduce an iterative convolutional dictionary feature decoupling module (CD-FDM) to separate features into cross-contrast and intra-contrast components, thereby reducing redundancy and interference. To fully integrate these features, a novel dual-prompt feature fusion expert module (DP-FFEM) is proposed. This module uses a frequency prompt to guide the selection of relevant reference features for incorporation into the target image, while an adaptive routing prompt determines the optimal method for fusing reference and target features to enhance reconstruction quality. Extensive experiments on public multi-contrast MRI datasets demonstrate that CD-DPE outperforms state-of-the-art methods in reconstructing fine details. Additionally, experiments on unseen datasets demonstrate that CD-DPE exhibits strong generalization capabilities.
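The frequency prompt described above selects which reference-image content to pass to the target branch. A minimal sketch of the underlying idea, filtering a reference feature map in the frequency domain so that only high-frequency texture survives, is shown below. The fixed radial high-pass mask and the `cutoff` parameter are illustrative assumptions; the paper's frequency prompt is learned, not hand-crafted.

```python
import numpy as np

def frequency_prompt_select(ref_feat: np.ndarray, cutoff: float = 0.25) -> np.ndarray:
    """Keep only the high-frequency (texture) content of a 2-D feature map.

    ref_feat: feature map of shape (H, W).
    cutoff: fraction of the half-spectrum radius treated as low frequency
            and suppressed (illustrative stand-in for a learned prompt).
    """
    h, w = ref_feat.shape
    # Centered 2-D spectrum of the feature map.
    spectrum = np.fft.fftshift(np.fft.fft2(ref_feat))
    # Radial distance of each frequency bin from the DC component.
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    # Binary high-pass mask: drop DC and low frequencies, keep texture.
    mask = dist > cutoff * min(h, w) / 2
    filtered = spectrum * mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))
```

Because the DC bin is zeroed, the output has (numerically) zero mean: smooth contrast information is removed while edges and fine texture are retained, which is the kind of reference content useful for guiding super-resolution.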
Problem

Research questions and friction points this paper is trying to address.

Reconstructing high-resolution MRI images from low-resolution multi-contrast scans
Addressing contrast disparities between modalities for effective feature integration
Enhancing anatomical detail and tissue differentiation for clinical diagnosis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Convolutional dictionary decouples cross-contrast MRI features
Dual-prompt module fuses frequency and routing guidance
Adaptive routing optimizes reference-target feature integration
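The routing idea in the bullets above can be sketched as a soft gate over a small set of fusion "experts". The three fusion rules and the softmax gate below are illustrative assumptions; in the actual model the gate logits would come from a learned routing prompt rather than being supplied by hand.

```python
import numpy as np

def route_and_fuse(target_feat: np.ndarray,
                   ref_feat: np.ndarray,
                   gate_logits: np.ndarray) -> np.ndarray:
    """Blend target and reference feature maps via a soft routing gate.

    target_feat, ref_feat: feature maps of the same shape (H, W).
    gate_logits: shape (3,), one logit per fusion expert (hypothetical).
    """
    # Three simple fusion experts (illustrative, not the paper's design).
    experts = np.stack([
        target_feat + ref_feat,        # additive fusion
        target_feat * (1 + ref_feat),  # multiplicative modulation
        ref_feat,                      # pass reference through unchanged
    ])
    # Numerically stable softmax over the experts.
    gate = np.exp(gate_logits - gate_logits.max())
    gate = gate / gate.sum()
    # Weighted sum of expert outputs: (3,) x (3, H, W) -> (H, W).
    return np.tensordot(gate, experts, axes=1)
```

Driving one logit much higher than the others makes the gate effectively one-hot, so the router can specialize per input while remaining differentiable.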
Xianming Gu
Key Laboratory of Advanced Medical Imaging and Intelligent Computing of Guizhou Province, Engineering Research Center of Text Computing & Cognitive Intelligence, Ministry of Education, College of Computer Science and Technology, Guizhou University, Guiyang, China
Lihui Wang
Ying Cao
Key Laboratory of Advanced Medical Imaging and Intelligent Computing of Guizhou Province, Engineering Research Center of Text Computing & Cognitive Intelligence, Ministry of Education, College of Computer Science and Technology, Guizhou University, Guiyang, China
Zeyu Deng
Key Laboratory of Advanced Medical Imaging and Intelligent Computing of Guizhou Province, Engineering Research Center of Text Computing & Cognitive Intelligence, Ministry of Education, College of Computer Science and Technology, Guizhou University, Guiyang, China
Yingfeng Ou
Key Laboratory of Advanced Medical Imaging and Intelligent Computing of Guizhou Province, Engineering Research Center of Text Computing & Cognitive Intelligence, Ministry of Education, College of Computer Science and Technology, Guizhou University, Guiyang, China
Guodong Hu
Key Laboratory of Advanced Medical Imaging and Intelligent Computing of Guizhou Province, Engineering Research Center of Text Computing & Cognitive Intelligence, Ministry of Education, College of Computer Science and Technology, Guizhou University, Guiyang, China
Yi Chen
The D-Lab, Department of Precision Medicine, GROW-School for Oncology and Reproduction, Maastricht University, 6200 MD Maastricht, the Netherlands