Towards Globally Predictable k-Space Interpolation: A White-box Transformer Approach

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing k-space interpolation methods predominantly rely on local predictability while neglecting global structural constraints; moreover, deep learning models—particularly CNNs—lack interpretability, limiting clinical trustworthiness. This work proposes the first white-box Transformer framework for accelerated MRI, reformulating k-space interpolation as a globally predictable, structured low-rank inverse problem. By deriving an interpretable attention mechanism from subgradient analysis, the method tightly couples annihilating filter priors with unrolled optimization, yielding a cascaded architecture that achieves both high reconstruction accuracy and algorithmic transparency. Experiments demonstrate substantial improvements over state-of-the-art methods in interpolation fidelity, while the physically grounded attention weights provide explicit, interpretable insights into data-driven global dependencies. This validates that explicit modeling of long-range k-space correlations enhances both reconstruction reliability and clinical applicability.

📝 Abstract
Interpolating missing data in k-space is essential for accelerating imaging. However, existing methods, including convolutional neural network-based deep learning, primarily exploit local predictability while overlooking the inherent global dependencies in k-space. Recently, Transformers have demonstrated remarkable success in natural language processing and image analysis due to their ability to capture long-range dependencies. This inspires the use of Transformers for k-space interpolation to better exploit its global structure. However, their lack of interpretability raises concerns regarding the reliability of interpolated data. To address this limitation, we propose GPI-WT, a white-box Transformer framework based on Globally Predictable Interpolation (GPI) for k-space. Specifically, we formulate GPI from the perspective of annihilation as a novel k-space structured low-rank (SLR) model. The global annihilation filters in the SLR model are treated as learnable parameters, and the subgradients of the SLR model naturally induce a learnable attention mechanism. By unfolding the subgradient-based optimization algorithm of SLR into a cascaded network, we construct the first white-box Transformer specifically designed for accelerated MRI. Experimental results demonstrate that the proposed method significantly outperforms state-of-the-art approaches in k-space interpolation accuracy while providing superior interpretability.
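The SLR formulation described in the abstract rests on a classical annihilation property: k-space samples of signals with compact support or sparse structure can be (approximately) annihilated by convolution with a short filter, which is equivalent to a structured (Hankel) matrix built from those samples being low-rank. The sketch below is a hypothetical toy illustration of that property in 1-D NumPy, not the paper's code: all names (`hankel_matrix`, the exponential toy signal) are made up for this example.

```python
import numpy as np

def hankel_matrix(s, filt_len):
    """Stack sliding windows of a 1-D k-space signal into a Hankel matrix."""
    n = len(s)
    return np.array([s[i:i + filt_len] for i in range(n - filt_len + 1)])

# Toy 1-D "k-space": samples of a sum of r complex exponentials,
# s[k] = sum_j c_j * z_j**k  (a standard model with an exact annihilator).
rng = np.random.default_rng(0)
r, n = 3, 64
z = np.exp(2j * np.pi * rng.uniform(size=r))            # distinct unit poles
c = rng.standard_normal(r) + 1j * rng.standard_normal(r)
k = np.arange(n)
s = (c[None, :] * z[None, :] ** k[:, None]).sum(axis=1)

# Hankel matrix with more columns than the model order is rank-deficient.
H = hankel_matrix(s, filt_len=r + 2)
sv = np.linalg.svd(H, compute_uv=False)
print(sv[r] / sv[0])          # trailing singular value ratio, ~ machine eps

# A null-space vector of H acts as an annihilating filter: H @ h ≈ 0.
_, _, Vh = np.linalg.svd(H)
h = Vh[-1].conj()
print(np.linalg.norm(H @ h) / sv[0])
```

In the paper's framework such annihilating filters are global (spanning all of k-space) and learned rather than fixed, and the subgradient steps of the resulting low-rank optimization are what induce the attention mechanism.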
Problem

Research questions and friction points this paper is trying to address.

Interpolating missing k-space data to accelerate MRI
Addressing lack of global dependency in existing methods
Improving interpretability of Transformer-based k-space interpolation
Innovation

Methods, ideas, or system contributions that make the work stand out.

White-box Transformer for k-space interpolation
Globally Predictable Interpolation with SLR model
Learnable attention via annihilation filters
Chen Luo
School of Mathematical Sciences, Inner Mongolia University, China
Qiyu Jin
Inner Mongolia University
denoising, deconvolution, MRI reconstruction, Image Processing, electron microscopy
Taofeng Xie
College of Computer and Information, Inner Mongolia Medical University, China
Xuemei Wang
Inner Mongolia Medical University Affiliated Hospital, China
Huayu Wang
University of Washington
Computer Vision, Medical Image, Human Intelligence
Congcong Liu
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
Liming Tang
School of Mathematics and Statistics, Hubei Minzu University, China
Guoqing Chen
SMTS at AMD
Low power circuit and architecture, 3D IC, clock/power networks
Zhuo-Xu Cui
Associate Professor, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
MRI, Inverse Problems, Deep Learning, Generative Models
Dong Liang
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, China