AI Summary
This work addresses the challenge in tagged MRI where anatomical structures, tags, and motion are highly coupled, further complicated by tag decay and imaging blur, which hinder accurate segmentation and motion tracking. The authors propose the first nonlinear blind inverse framework that integrates MR physical modeling with deep generative priors to jointly recover high-resolution anatomical images, synthesize dynamic cine sequences, and estimate continuous 3D Lagrangian motion fields, all in an unsupervised manner. By leveraging differentiable optimization for blind inversion and 3D diffeomorphic motion estimation, the method significantly enhances anatomical clarity, dynamic image quality, and motion estimation accuracy on brain tagged MRI, outperforming existing specialized approaches.
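The "3D diffeomorphic motion estimation" mentioned above typically parameterizes the deformation by a velocity field integrated into an invertible displacement. As a hedged illustration (the paper does not specify its integrator, and all names and shapes here are mine), a minimal 2D scaling-and-squaring sketch, the standard trick for turning a stationary velocity field into a diffeomorphism, looks like this:

```python
import numpy as np

def scaling_and_squaring(velocity, steps=6):
    """Illustrative sketch: integrate a stationary velocity field
    (H, W, 2) into a displacement field by repeated self-composition
    ("scaling and squaring"). Not the paper's implementation."""
    h, w = velocity.shape[:2]
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                indexing="ij"), axis=-1).astype(float)
    # Start from a small displacement: phi_0 ~ id + v / 2^steps.
    disp = velocity / (2 ** steps)
    for _ in range(steps):
        # Compose phi with itself: disp(x) <- disp(x) + disp(x + disp(x)).
        coords = grid + disp
        coords[..., 0] = np.clip(coords[..., 0], 0, h - 1)
        coords[..., 1] = np.clip(coords[..., 1], 0, w - 1)
        # Nearest-neighbour resampling keeps the sketch dependency-free;
        # real implementations use differentiable linear interpolation.
        idx = np.round(coords).astype(int)
        disp = disp + disp[idx[..., 0], idx[..., 1]]
    return disp
```

Because each step composes a map with itself, the result stays invertible for sufficiently smooth velocities, which is what makes the estimated Lagrangian motion trajectories well-behaved over time.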
Abstract
Tagged MRI enables non-invasive tracking of internal tissue motion. It encodes motion by modulating the anatomy with periodic tags that deform along with the tissue. However, the entanglement between anatomy, tags, and motion poses significant challenges for post-processing. The presence of tags and imaging blur hinders downstream tasks such as anatomical segmentation, and tag fading due to T1 relaxation disrupts the brightness-constancy assumption underlying motion tracking. For decades, these challenges have been addressed in isolation, often sub-optimally. In contrast, we introduce a blind, nonlinear inverse framework for tagged MRI that, for the first time, unifies these tasks: anatomical image recovery, high-resolution cine image synthesis, and motion estimation. At its core, the synergy of MR physics and generative priors enables us to blindly estimate the unknown forward imaging models and the high-resolution underlying anatomy, while simultaneously tracking 3D diffeomorphic Lagrangian motion over time. Experiments on tagged brain MRI demonstrate that our approach yields high-resolution anatomical images and cine images, and more accurate motion estimates than specialized methods.
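The tag-fading problem in the abstract can be made concrete with a toy forward model. As a hedged sketch (a simplified SPAMM-style modulation with parameters I chose for illustration, not the paper's actual imaging model), the tagged image is the anatomy multiplied by a periodic pattern whose contrast decays as exp(-t/T1), so the same tissue point changes brightness between frames:

```python
import numpy as np

def tagged_image(anatomy, t, T1=0.8, tag_period=8.0, depth=1.0):
    """Toy SPAMM-like forward model (illustrative only): modulate the
    anatomy with periodic tag lines whose contrast fades as exp(-t/T1),
    showing why brightness constancy breaks for motion tracking."""
    x = np.arange(anatomy.shape[-1])
    # Periodic tag pattern in [0, 1]: dark lines every `tag_period` pixels.
    tags = 0.5 * (1.0 + np.cos(2.0 * np.pi * x / tag_period))
    fade = np.exp(-t / T1)  # T1-relaxation tag fading
    modulation = 1.0 - depth * fade * (1.0 - tags)
    return anatomy * modulation

anatomy = np.ones((16, 16))
early = tagged_image(anatomy, t=0.05)  # tags at nearly full contrast
late = tagged_image(anatomy, t=1.5)    # tags largely washed out
# Tag contrast (max - min intensity) shrinks over time.
print(np.ptp(early), np.ptp(late))
```

Classical trackers that assume a moving point keeps its intensity mistake this fading for motion, which is why the framework estimates the fading forward model jointly with the motion instead of assuming brightness constancy.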