🤖 AI Summary
Tracking 4D cardiac motion is challenging due to the high homogeneity and lack of distinctive anatomical landmarks in myocardial tissue. Method: We propose Dyna3DGR, a self-supervised dynamic 3D Gaussian representation framework that jointly optimizes cardiac anatomy and continuous motion fields. It innovatively integrates explicit 3D Gaussian point clouds—ensuring topological consistency and geometric interpretability—with implicit neural motion fields for high-fidelity deformation modeling. Optimization is performed end-to-end via differentiable volumetric rendering, eliminating the need for point-wise annotations or large-scale labeled datasets. Results: Evaluated on the ACDC dataset, Dyna3DGR significantly outperforms existing differentiable registration methods, achieving state-of-the-art performance in both motion trajectory accuracy and structural stability. This demonstrates its robustness and precision for 4D tracking in low-texture medical images.
📝 Abstract
Accurate analysis of cardiac motion is crucial for evaluating cardiac function. While dynamic cardiac magnetic resonance imaging (CMR) can capture detailed tissue motion throughout the cardiac cycle, fine-grained 4D cardiac motion tracking remains challenging due to the homogeneous nature of myocardial tissue and the lack of distinctive features. Existing approaches can be broadly categorized into image-based and representation-based, each with its limitations. Image-based methods, including both traditional and deep learning-based registration approaches, either struggle with topological consistency or rely heavily on extensive training data. Representation-based methods, while promising, often suffer from loss of image-level details. To address these limitations, we propose Dynamic 3D Gaussian Representation (Dyna3DGR), a novel framework that combines explicit 3D Gaussian representation with implicit neural motion field modeling. Our method simultaneously optimizes cardiac structure and motion in a self-supervised manner, eliminating the need for extensive training data or point-to-point correspondences. Through differentiable volumetric rendering, Dyna3DGR efficiently bridges continuous motion representation with image-space alignment while preserving both topological and temporal consistency. Comprehensive evaluations on the ACDC dataset demonstrate that our approach surpasses state-of-the-art deep learning-based diffeomorphic registration methods in tracking accuracy. The code will be available at https://github.com/windrise/Dyna3DGR.
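To make the pipeline concrete, the following is a minimal NumPy sketch (not the authors' implementation) of the core idea: explicit 3D Gaussian primitives are deformed by a motion field and rendered into a volume, with a reconstruction loss providing the self-supervised signal. The toy `motion_field` stands in for the implicit neural motion field, and all names, shapes, and values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Explicit representation: N isotropic 3D Gaussians (means, scale, amplitude).
N = 64
means = rng.uniform(0.2, 0.8, size=(N, 3))   # canonical (t=0) positions
sigma = 0.05                                  # shared isotropic std (toy value)
amps = np.ones(N)                             # per-Gaussian intensity

def motion_field(x, t):
    """Toy stand-in for the implicit neural motion field: a smooth,
    time-dependent displacement (a real model would use an MLP
    conditioned on position and time)."""
    return 0.02 * np.sin(2 * np.pi * t) * np.stack(
        [np.sin(2 * np.pi * x[:, 1]),
         np.cos(2 * np.pi * x[:, 0]),
         np.zeros(len(x))], axis=1)

def render_volume(centers, res=16):
    """Splat Gaussians onto a voxel grid by summing Gaussian kernels;
    in the actual method this rendering step is differentiable."""
    g = np.linspace(0.0, 1.0, res)
    grid = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)       # (res,res,res,3)
    d2 = ((grid[None] - centers[:, None, None, None, :]) ** 2).sum(-1)  # (N,res,res,res)
    return (amps[:, None, None, None] * np.exp(-d2 / (2 * sigma**2))).sum(0)

# Deform the canonical Gaussians to time t, then render.
t = 0.25
deformed = means + motion_field(means, t)
vol = render_volume(deformed)

# Self-supervised objective: image-space alignment against the observed
# frame (here the t=0 rendering serves as a stand-in target volume).
target = render_volume(means)
loss = float(((vol - target) ** 2).mean())
```

Because both the splatting and the motion field are smooth functions of the Gaussian parameters, gradients of the image-space loss can flow back to jointly update anatomy (Gaussian parameters) and motion (field weights), which is the end-to-end optimization the abstract describes.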