Decoupling Multi-Contrast Super-Resolution: Pairing Unpaired Synthesis with Implicit Representations

📅 2025-05-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
MRI multi-contrast super-resolution (MCSR) faces two key clinical bottlenecks: heavy reliance on large quantities of paired low-resolution (LR)–high-resolution (HR) training data and rigid dependence on fixed upscaling factors. To address these, we propose a decoupled two-stage framework. First, unpaired cross-modal image translation synthesizes high-resolution reference images of the target contrast. Second, coordinate-conditioned implicit neural representations (INRs) enable unsupervised, scale-agnostic super-resolution reconstruction. Critically, the method requires no paired LR–HR data and supports arbitrary upscaling factors while preserving anatomical consistency. Quantitative and qualitative evaluations on 4× and 8× MCSR tasks demonstrate substantial improvements in image fidelity and anatomical accuracy over state-of-the-art methods, validating the framework's data efficiency and scalability in realistic clinical settings, where acquiring paired multi-contrast HR data remains prohibitively expensive and time-consuming.

📝 Abstract
Magnetic Resonance Imaging (MRI) is critical for clinical diagnostics but is often limited by long acquisition times and low signal-to-noise ratios, especially in modalities like diffusion and functional MRI. The multi-contrast nature of MRI presents a valuable opportunity for cross-modal enhancement, where high-resolution (HR) modalities can serve as references to boost the quality of their low-resolution (LR) counterparts, motivating the development of Multi-Contrast Super-Resolution (MCSR) techniques. Prior work has shown that leveraging complementary contrasts can improve SR performance; however, effective feature extraction and fusion across modalities with varying resolutions remains a major challenge. Moreover, existing MCSR methods often assume fixed resolution settings, and all require large, perfectly paired training datasets, conditions rarely met in real-world clinical environments. To address these challenges, we propose a novel Modular Multi-Contrast Super-Resolution (MCSR) framework that eliminates the need for paired training data and supports arbitrary upscaling. Our method decouples the MCSR task into two stages: (1) Unpaired Cross-Modal Synthesis (U-CMS), which translates a high-resolution reference modality into a synthesized version of the target contrast, and (2) Unsupervised Super-Resolution (U-SR), which reconstructs the final output using implicit neural representations (INRs) conditioned on spatial coordinates. This design enables scale-agnostic and anatomically faithful reconstruction by bridging unpaired cross-modal synthesis with unsupervised resolution enhancement. Experiments show that our method achieves superior performance at 4× and 8× upscaling, with improved fidelity and anatomical consistency over existing baselines. Our framework demonstrates strong potential for scalable, subject-specific, and data-efficient MCSR in real-world clinical settings.
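The scale-agnostic property of the U-SR stage comes from representing the image as a continuous function of spatial coordinates, which can then be queried on a grid of any density. The following is a minimal NumPy toy sketch of that idea, not the paper's implementation: the network weights are random, and the Fourier-feature encoding is one common (assumed) choice of coordinate embedding.

```python
# Toy coordinate-conditioned implicit neural representation (INR).
# Illustrative only: an untrained 2-layer MLP over Fourier-encoded 2-D
# coordinates, queried at two different grid resolutions to show that
# the same continuous function supports arbitrary upscaling.
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(coords, n_freqs=8):
    """Encode coordinates in [0, 1]^2 with sin/cos features at dyadic frequencies."""
    freqs = 2.0 ** np.arange(n_freqs)                # (n_freqs,)
    angles = 2 * np.pi * coords[..., None] * freqs   # (..., 2, n_freqs)
    feats = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return feats.reshape(*coords.shape[:-1], -1)     # (..., 4 * n_freqs)

class ToyINR:
    """Two-layer MLP mapping encoded coordinates to a scalar intensity."""
    def __init__(self, in_dim, hidden=64):
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, coords):
        h = np.maximum(fourier_features(coords) @ self.w1 + self.b1, 0.0)  # ReLU
        return (h @ self.w2 + self.b2).squeeze(-1)

def query_grid(inr, size):
    """Sample the INR on a size x size grid -- any size, hence scale-agnostic."""
    axis = np.linspace(0.0, 1.0, size)
    coords = np.stack(np.meshgrid(axis, axis, indexing="ij"), axis=-1)
    return inr(coords)

inr = ToyINR(in_dim=4 * 8)
lr_image = query_grid(inr, 32)    # coarse sampling of the continuous function
hr_image = query_grid(inr, 256)   # 8x denser sampling of the same function
print(lr_image.shape, hr_image.shape)
```

In the paper's setting the MLP would be fit per subject to agree with the observed LR data (and the synthesized HR reference), after which querying a denser coordinate grid yields the super-resolved output at any chosen factor.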
Problem

Research questions and friction points this paper is trying to address.

Enhancing low-resolution MRI using high-resolution multi-contrast references
Overcoming unpaired data limitations in multi-contrast super-resolution
Enabling arbitrary upscaling with implicit neural representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decouples MCSR into unpaired synthesis and super-resolution
Uses implicit neural representations for scale-agnostic reconstruction
Eliminates need for paired training data
Hongyu Rui
Department of Bioengineering and I-X, Imperial College London, London, UK
Yinzhe Wu
Imperial College London
Fanwen Wang
Imperial College London
Medical imaging · MRI reconstruction · Image registration
Jiahao Huang
Department of Bioengineering and I-X, Imperial College London, London, UK; Cardiovascular Research Centre, Royal Brompton Hospital, London, UK
Liutao Yang
Imperial College London
Medical Image Reconstruction · Computer Vision · Machine Learning
Zi Wang
Department of Bioengineering and I-X, Imperial College London, London, UK
Guang Yang
Department of Bioengineering and I-X, Imperial College London, London, UK; Cardiovascular Research Centre, Royal Brompton Hospital, London, UK