🤖 AI Summary
Head motion during brain PET imaging induces severe image artifacts and quantitative biases, compromising the diagnosis of neurological disorders. To address this, the authors propose DL-HMC++, a hardware-free, data-driven motion correction framework based on supervised deep learning with cross-attention, reported as the first application of cross-attention to rigid-body motion estimation in PET. DL-HMC++ predicts rigid head motion directly from one-second 3D PET raw data. Evaluated on multi-tracer datasets from two scanners (HRRT and Biograph mCT), it achieves average ROI SUV difference ratios relative to hardware-based motion tracking (HMT), the research gold standard, of 1.2±0.5% (HRRT) and 0.5±0.2% (mCT), with corrected image quality matching HMT and substantially outperforming existing data-driven approaches. The method generalizes across scanners and tracers, and its open-source implementation facilitates clinical translation.
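The core idea of predicting rigid motion with cross-attention can be sketched in a few lines: features of the current short frame attend to features of a reference frame, and a regression head maps the fused features to six rigid-motion parameters. The sketch below is a minimal, hypothetical illustration in NumPy; the token features, dimensions, and random regression weights are stand-ins, not the paper's actual network.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, key, value):
    """Scaled dot-product cross-attention: queries from the moving
    frame attend to keys/values from the reference frame."""
    d = query.shape[-1]
    scores = query @ key.T / np.sqrt(d)        # (Nq, Nk) similarity
    return softmax(scores, axis=-1) @ value    # (Nq, d) fused features

rng = np.random.default_rng(0)
d = 32
ref_tokens = rng.standard_normal((64, d))  # reference-frame features (hypothetical)
mov_tokens = rng.standard_normal((64, d))  # current-frame features (hypothetical)

fused = cross_attention(mov_tokens, ref_tokens, ref_tokens)

# Regression head mapping pooled features to 6 rigid-motion parameters
# (3 translations, 3 rotations); random weights stand in for a trained head.
W = rng.standard_normal((d, 6)) * 0.01
motion_params = fused.mean(axis=0) @ W
print(motion_params.shape)  # (6,)
```

In the actual framework, such a head would be trained in a supervised manner against HMT-measured motion, as the summary describes.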
📝 Abstract
Head movement poses a significant challenge in brain positron emission tomography (PET) imaging, resulting in image artifacts and inaccuracies in tracer uptake quantification. Effective head motion estimation and correction are crucial for precise quantitative image analysis and accurate diagnosis of neurological disorders. Hardware-based motion tracking (HMT) has limited applicability in real-world clinical practice. To overcome this limitation, we propose a deep-learning head motion correction approach with cross-attention (DL-HMC++) to predict rigid head motion from one-second 3D PET raw data. DL-HMC++ is trained in a supervised manner by leveraging existing dynamic PET scans with gold-standard motion measurements from external HMT. We evaluate DL-HMC++ on two PET scanners (HRRT and mCT) and four radiotracers (18F-FDG, 18F-FPEB, 11C-UCB-J, and 11C-LSN3172176) to demonstrate the effectiveness and generalization of the approach in large-cohort PET studies. Quantitative and qualitative results demonstrate that DL-HMC++ consistently outperforms state-of-the-art data-driven motion estimation methods, producing motion-free images with clear delineation of brain structures and reduced motion artifacts that are indistinguishable from those obtained with gold-standard HMT. Brain region-of-interest standardized uptake value (SUV) analysis shows average difference ratios between DL-HMC++ and gold-standard HMT of 1.2±0.5% for HRRT and 0.5±0.2% for mCT. DL-HMC++ demonstrates the potential for data-driven PET head motion correction to remove the burden of HMT, making motion correction accessible to clinical populations beyond research settings. The code is available at https://github.com/maxxxxxxcai/DL-HMC-TMI.
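The reported SUV difference ratio can be computed as the percent deviation of each region-of-interest SUV from its HMT reference, averaged over ROIs. The exact formula is not given in the abstract, so the sketch below is an assumption, and the ROI SUV numbers are toy values, not results from the paper.

```python
import numpy as np

def suv_difference_ratio(suv_method, suv_reference):
    """Assumed metric: percent absolute difference between a method's
    ROI SUVs and the HMT reference, with mean and std over ROIs."""
    suv_method = np.asarray(suv_method, dtype=float)
    suv_reference = np.asarray(suv_reference, dtype=float)
    ratios = 100.0 * np.abs(suv_method - suv_reference) / suv_reference
    return ratios.mean(), ratios.std()

# Toy ROI SUVs for illustration only (hypothetical values).
suv_hmt = [5.20, 3.80, 4.10, 6.00]
suv_dl  = [5.25, 3.76, 4.15, 5.95]
mean, std = suv_difference_ratio(suv_dl, suv_hmt)
print(f"SUV difference ratio: {mean:.2f}% ± {std:.2f}%")
```

A small mean under this metric indicates that the corrected images quantify tracer uptake nearly identically to the HMT reference, which is the sense in which the abstract reports 1.2±0.5% (HRRT) and 0.5±0.2% (mCT).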