Faster, Self-Supervised Super-Resolution for Anisotropic Multi-View MRI Using a Sparse Coordinate Loss

📅 2025-09-09
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Low-resolution (LR) orthogonal anisotropic MRI scans suffer from blurred anatomical detail, and conventional single-view analysis is time-consuming and error-prone. To address this without requiring high-resolution (HR) ground-truth labels, we propose a self-supervised multi-view neural network framework for end-to-end super-resolution reconstruction. Our key contributions are: (1) a sparse coordinate loss function enabling flexible LR image fusion under arbitrary scaling factors; and (2) a two-stage strategy, a generic offline model followed by personalized online optimization, balancing generalizability with subject-specific adaptability. Evaluated on multiple independent datasets, our method matches or surpasses state-of-the-art self-supervised approaches in reconstruction fidelity. Moreover, personalized inference accelerates reconstruction by up to 10× while preserving the high-fidelity anatomical structures critical for clinical interpretation.

📝 Abstract
Acquiring images in high resolution is often a challenging task. Especially in the medical sector, image quality has to be balanced with acquisition time and patient comfort. To strike a compromise between scan time and quality for Magnetic Resonance (MR) imaging, two anisotropic scans with different low-resolution (LR) orientations can be acquired. Typically, LR scans are analyzed individually by radiologists, which is time-consuming and can lead to inaccurate interpretation. To tackle this, we propose a novel approach for fusing two orthogonal anisotropic LR MR images to reconstruct anatomical details in a unified representation. Our multi-view neural network is trained in a self-supervised manner, without requiring corresponding high-resolution (HR) data. To optimize the model, we introduce a sparse coordinate-based loss, enabling the integration of LR images with arbitrary scaling. We evaluate our method on MR images from two independent cohorts. Our results demonstrate comparable or even improved super-resolution (SR) performance compared to state-of-the-art (SOTA) self-supervised SR methods for different upsampling scales. By combining a patient-agnostic offline and a patient-specific online phase, we achieve a substantial speed-up of up to ten times for patient-specific reconstruction while achieving similar or better SR quality. Code is available at https://github.com/MajaSchle/tripleSR.
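The abstract describes the sparse coordinate-based loss only at a high level. As a rough illustration (a toy numpy sketch under our own assumptions, not the authors' implementation from the linked repository), the idea of supervising a dense SR volume with sparsely sampled LR voxel coordinates under an arbitrary float scaling factor might look like this:

```python
import numpy as np

def sparse_coordinate_loss(sr, lr, axis=0, scale=2.0, n_samples=256, seed=0):
    """Toy sparse coordinate loss (illustrative sketch, not the paper's code).

    Randomly samples LR voxel positions, maps them to continuous
    coordinates in the SR grid (voxel-center convention), linearly
    interpolates the SR volume along the anisotropic axis, and
    penalizes the squared intensity difference. Because the mapping
    uses a float `scale`, non-integer scaling factors work as well.
    """
    rng = np.random.default_rng(seed)
    # random subset of LR voxel indices: the "sparse" part of the loss
    idx = [rng.integers(0, d, n_samples) for d in lr.shape]
    target = lr[tuple(idx)]
    # continuous SR coordinate of each sampled LR voxel center
    c = idx[axis] * scale + (scale - 1.0) / 2.0
    lo = np.clip(np.floor(c).astype(int), 0, sr.shape[axis] - 1)
    hi = np.clip(lo + 1, 0, sr.shape[axis] - 1)
    w = c - np.floor(c)
    # linear interpolation of the SR volume along the anisotropic axis
    idx_lo, idx_hi = list(idx), list(idx)
    idx_lo[axis], idx_hi[axis] = lo, hi
    pred = (1.0 - w) * sr[tuple(idx_lo)] + w * sr[tuple(idx_hi)]
    return float(np.mean((pred - target) ** 2))
```

As a sanity check, an intensity ramp whose LR view samples exactly the mapped voxel centers yields a loss of zero, confirming the coordinate mapping; in the paper's setting the same loss would be evaluated against both orthogonal LR views.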
Problem

Research questions and friction points this paper is trying to address.

Fusing two orthogonal anisotropic low-resolution MR images
Reconstructing anatomical details without high-resolution data
Achieving faster super-resolution with self-supervised learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised multi-view network fuses orthogonal MRI scans
Sparse coordinate loss enables arbitrary scaling integration
Combines offline and online phases for tenfold speedup
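The offline/online split amounts to warm-starting subject-specific self-supervised optimization from generically pretrained weights. A minimal numeric sketch of why this saves time (a quadratic objective stands in for the real network loss; all names and values here are hypothetical):

```python
import numpy as np

def online_adapt(theta0, grad, lr=0.1, tol=1e-3, max_steps=1000):
    """Gradient descent until the gradient norm drops below tol.

    Returns (theta, steps): the adapted parameters and how many
    update steps the subject-specific "online" phase needed.
    """
    theta = theta0.astype(float).copy()
    for step in range(max_steps):
        g = grad(theta)
        if np.linalg.norm(g) < tol:
            return theta, step
        theta -= lr * g
    return theta, max_steps

# Toy subject-specific objective: 0.5 * ||theta - target||^2
target = np.array([1.0, 2.0, 3.0])
grad = lambda th: th - target

# Pretend the generic offline model already sits near any subject's
# optimum, while a from-scratch init does not.
generic = target + 0.05
scratch = np.zeros(3)

_, steps_warm = online_adapt(generic, grad)
_, steps_cold = online_adapt(scratch, grad)
print(steps_warm, steps_cold)  # warm start converges in fewer steps
```

The same logic underlies the reported speed-up: the closer the offline model's features are to a new subject's anatomy, the fewer online optimization steps personalization needs.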
Maja Schlereth
Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
Moritz Schillinger
Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
Katharina Breininger
Center for AI and Data Science, Julius-Maximilians-Universität Würzburg
Machine Learning · Medical Imaging · Intraoperative Imaging · Image Guidance · Deformation Modelling