🤖 AI Summary
Current video Transformers for echocardiogram analysis are susceptible to background interference and prone to learning spurious correlations. To address this, we propose ViACT—a novel anatomically guided video Transformer framework that, for the first time, embeds anatomical priors as myocardial point sets into the Transformer architecture. ViACT integrates masked autoencoding with joint geometric–image patch encoding to explicitly steer attention toward myocardial regions. This design enables implicit, end-to-end myocardial point tracking without handcrafted modules and yields pathology-aligned, interpretable attention maps. Evaluated on left ventricular ejection fraction regression and cardiac amyloidosis detection, ViACT significantly outperforms state-of-the-art baselines; its attention heatmaps precisely localize pathologically relevant anatomical regions. Moreover, the model demonstrates strong generalizability, transferring seamlessly to myocardial motion tracking. The core innovations lie in (i) anatomical prior embedding via learnable myocardial point-set modeling and (ii) an interpretability-driven video Transformer architecture that jointly optimizes representation learning and clinical plausibility.
📝 Abstract
Video transformers have recently demonstrated strong potential for echocardiogram (echo) analysis, leveraging self-supervised pre-training and flexible adaptation across diverse tasks. However, like other models operating on videos, they are prone to learning spurious correlations from non-diagnostic regions such as image backgrounds. To overcome this limitation, we propose the Video Anatomically Constrained Transformer (ViACT), a novel framework that integrates anatomical priors directly into the transformer architecture. ViACT represents a deforming anatomical structure as a point set and encodes both its spatial geometry and corresponding image patches into transformer tokens. During pre-training, ViACT follows a masked autoencoding strategy that masks and reconstructs only anatomical patches, ensuring that representation learning focuses on the anatomical region. The pre-trained model can then be fine-tuned for tasks localized to this region. In this work we focus on the myocardium, demonstrating the framework on echo analysis tasks such as left ventricular ejection fraction (EF) regression and cardiac amyloidosis (CA) detection. The anatomical constraint focuses transformer attention within the myocardium, yielding interpretable attention maps aligned with regions of known CA pathology. Moreover, ViACT generalizes to myocardial point tracking without requiring task-specific components such as the correlation volumes used in specialized tracking networks.
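To make the tokenization and pre-training idea concrete, the following is a minimal NumPy sketch (not the authors' code) of the two ingredients the abstract describes: each token jointly embeds an image patch and the geometry of its myocardial point, and masking for MAE-style reconstruction is applied only to these anatomical tokens. All names, dimensions, and the linear embeddings are hypothetical placeholders for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_tokens(points, patches, W_geo, W_img):
    """Joint geometric-image token: patch-pixel embedding plus
    point-coordinate embedding, summed into one token per myocardial point.
    (Linear maps stand in for the learned patch/positional encoders.)"""
    return patches @ W_img + points @ W_geo

def mask_anatomical_tokens(tokens, mask_ratio, rng):
    """MAE-style masking restricted to anatomical tokens: a random subset
    is hidden from the encoder and becomes the reconstruction target."""
    n = tokens.shape[0]
    n_mask = int(round(mask_ratio * n))
    idx = rng.permutation(n)
    masked_idx = np.sort(idx[:n_mask])    # to be reconstructed
    visible_idx = np.sort(idx[n_mask:])   # fed to the encoder
    return tokens[visible_idx], visible_idx, masked_idx

# Hypothetical sizes: 64 myocardial points, 16x16 patches, 32-dim tokens.
N, P, D = 64, 16 * 16, 32
points = rng.standard_normal((N, 2))       # (x, y) myocardial point set
patches = rng.standard_normal((N, P))      # flattened patch at each point
W_geo = 0.1 * rng.standard_normal((2, D))  # geometric embedding (placeholder)
W_img = 0.1 * rng.standard_normal((P, D))  # patch embedding (placeholder)

tokens = make_tokens(points, patches, W_geo, W_img)
visible, vis_idx, mask_idx = mask_anatomical_tokens(tokens, 0.75, rng)
print(tokens.shape, visible.shape, mask_idx.shape)
```

Because masking is defined over the point-set tokens rather than a full image grid, background pixels never enter the reconstruction objective, which is the mechanism by which the anatomical constraint keeps representation learning inside the myocardium.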