🤖 AI Summary
Existing k-space interpolation methods predominantly rely on local predictability while neglecting global structural constraints; moreover, deep learning models, particularly CNNs, lack interpretability, which limits clinical trustworthiness. This work proposes the first white-box Transformer framework for accelerated MRI, reformulating k-space interpolation as a globally predictable, structured low-rank inverse problem. By deriving an interpretable attention mechanism from subgradient analysis, the method tightly couples annihilating-filter priors with unrolled optimization, yielding a cascaded architecture that offers both high reconstruction accuracy and algorithmic transparency. Experiments demonstrate substantial improvements over state-of-the-art methods in interpolation fidelity, while the physically grounded attention weights provide explicit, interpretable views of data-driven global dependencies. These results support the claim that explicitly modeling long-range k-space correlations improves both reconstruction reliability and clinical applicability.
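For readers unfamiliar with annihilation-based modeling, the relations below sketch the standard structured low-rank (SLR) setup that the summary alludes to. The notation ($\hat{x}$, $h$, $\mathcal{H}$, $\Omega$) is generic to the SLR literature and is an assumption here; the paper's exact GPI formulation with global filters may differ in detail.

```latex
% Generic shift-invariant annihilation relation (standard SLR setting;
% notation is illustrative, not taken from the paper):
% a filter h annihilates the k-space data \hat{x} under convolution,
(\hat{x} * h)(\mathbf{k}) = 0 \quad \text{for all } \mathbf{k},
% which is equivalent to a structured (Hankel-type) matrix losing rank:
\mathcal{H}(\hat{x})\,\mathrm{vec}(h) = 0
\;\Longrightarrow\;
\mathcal{H}(\hat{x}) \text{ is rank-deficient},
% motivating the constrained low-rank recovery problem
\min_{\hat{x}} \;\big\|\mathcal{H}(\hat{x})\big\|_{*}
\quad \text{s.t.} \quad \mathcal{P}_{\Omega}(\hat{x}) = y,
% where \Omega indexes the acquired k-space samples and y is the measured data.
```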
📝 Abstract
Interpolating missing data in k-space is essential for accelerating magnetic resonance imaging (MRI). However, existing methods, including convolutional neural network (CNN)-based deep learning, primarily exploit local predictability while overlooking the inherent global dependencies in k-space. Recently, Transformers have demonstrated remarkable success in natural language processing and image analysis due to their ability to capture long-range dependencies. This inspires the use of Transformers for k-space interpolation to better exploit the global structure of k-space. However, their lack of interpretability raises concerns regarding the reliability of interpolated data. To address this limitation, we propose GPI-WT, a white-box Transformer framework based on Globally Predictable Interpolation (GPI) for k-space. Specifically, we formulate GPI from the perspective of annihilation as a novel k-space structured low-rank (SLR) model. The global annihilation filters in the SLR model are treated as learnable parameters, and the subgradients of the SLR model naturally induce a learnable attention mechanism. By unfolding the subgradient-based optimization algorithm of SLR into a cascaded network, we construct the first white-box Transformer specifically designed for accelerated MRI. Experimental results demonstrate that the proposed method significantly outperforms state-of-the-art approaches in k-space interpolation accuracy while providing superior interpretability.
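As a rough illustration of the unrolled architecture the abstract describes, the PyTorch sketch below implements one cascade stage: a subgradient-style update driven by learnable annihilation filters, followed by hard data consistency. All class, parameter, and variable names are hypothetical, and the paper's attention operator (derived from the SLR subgradient) is approximated by a plain filter-adjoint step; this is a sketch under those assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GPICascadeStage(nn.Module):
    """One unrolled stage (hypothetical sketch): learnable annihilation
    filtering in k-space, a subgradient-style update, and hard data
    consistency. Complex k-space is stored as two real channels (re, im)."""

    def __init__(self, num_filters: int = 8, kernel_size: int = 7):
        super().__init__()
        # Learnable "global annihilation" filters h: trained so that,
        # ideally, the filtered k-space vanishes on fully sampled data.
        self.annihilate = nn.Conv2d(2, 2 * num_filters, kernel_size,
                                    padding=kernel_size // 2, bias=False)
        # Adjoint-like operator mapping filter responses back to k-space.
        self.adjoint = nn.Conv2d(2 * num_filters, 2, kernel_size,
                                 padding=kernel_size // 2, bias=False)
        self.step = nn.Parameter(torch.tensor(0.1))  # learnable step size

    def forward(self, k, k0, mask):
        # Subgradient-style descent on the annihilation residual
        # (a stand-in for the paper's derived attention operator).
        k = k - self.step * self.adjoint(self.annihilate(k))
        # Hard data consistency: acquired samples are kept unchanged.
        return mask * k0 + (1 - mask) * k

# Cascading several stages mirrors the unrolled optimization algorithm.
stages = nn.ModuleList(GPICascadeStage() for _ in range(8))
mask = (torch.rand(1, 1, 64, 64) > 0.7).float()   # undersampling mask
k0 = torch.randn(1, 2, 64, 64) * mask             # measured (zero-filled) k-space
k = k0.clone()
for stage in stages:
    k = stage(k, k0, mask)
```

Stacking stages with untied weights gives a cascaded, end-to-end trainable network; since each learned quantity corresponds to a term in the SLR subgradient step, every weight retains an algorithmic meaning, which is the sense in which the abstract calls the Transformer "white-box".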