🤖 AI Summary
Existing deep learning approaches to thyroid nodule malignancy classification underuse the spatiotemporal information in dynamic ultrasound (US) cine clips, contributing to unnecessary biopsies of benign nodules. This paper proposes STACT-Time, a model that combines self-attention and cross-attention to jointly encode temporal US features and nodule mask-guided features derived from a pretrained segmentation model. This multimodal spatiotemporal representation learning framework strengthens joint modeling of local nodule structure and motion patterns. Under cross-validation, STACT-Time achieves a precision of 0.91 ± 0.02 and an F1 score of 0.89 ± 0.02, outperforming state-of-the-art methods. It maintains high sensitivity for malignancy while improving specificity, thereby reducing the rate of unnecessary fine-needle aspiration (FNA) procedures.
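The summary above describes the core fusion idea: temporal self-attention within each feature stream, plus cross-attention between cine features and segmentation-mask features. The sketch below shows one way such a fusion could look in PyTorch. It is illustrative only: all dimensions, layer choices, and names (`STACTTimeSketch`, the 256-dim per-frame embeddings, mean pooling over time) are assumptions, not the paper's actual implementation.

```python
# Minimal sketch of self-attention + cross-attention fusion for two
# per-frame feature streams. All hyperparameters are illustrative.
import torch
import torch.nn as nn


class STACTTimeSketch(nn.Module):
    """Toy spatio-temporal fusion of cine features and mask-guided features.

    Inputs are per-frame embeddings: one stream from the US cine frames and
    one from segmentation masks produced by a pretrained (frozen) model.
    """

    def __init__(self, dim: int = 256, heads: int = 4, num_classes: int = 2):
        super().__init__()
        # Temporal self-attention over each feature stream independently.
        self.cine_self_attn = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True
        )
        self.mask_self_attn = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True
        )
        # Cross-attention: cine features query the mask-guided features.
        self.cross_attn = nn.MultiheadAttention(
            embed_dim=dim, num_heads=heads, batch_first=True
        )
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, cine_feats: torch.Tensor, mask_feats: torch.Tensor):
        # cine_feats, mask_feats: (batch, time, dim) per-frame embeddings.
        c = self.cine_self_attn(cine_feats)   # temporal context, cine stream
        m = self.mask_self_attn(mask_feats)   # temporal context, mask stream
        fused, _ = self.cross_attn(query=c, key=m, value=m)
        fused = self.norm(c + fused)          # residual fusion of the streams
        return self.head(fused.mean(dim=1))   # pool over time -> class logits


# Smoke test with random features standing in for encoder outputs.
if __name__ == "__main__":
    model = STACTTimeSketch()
    cine = torch.randn(2, 16, 256)   # 2 clips, 16 frames, 256-dim features
    mask = torch.randn(2, 16, 256)
    print(model(cine, mask).shape)   # torch.Size([2, 2])
```

One design point worth noting: using the cine stream as the query lets mask-derived structure re-weight the temporal imaging features, which matches the summary's description of segmentation-guided feature enhancement.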
📝 Abstract
Thyroid cancer is among the most common cancers in the United States. Thyroid nodules are frequently detected through ultrasound (US) imaging, and some require further evaluation via fine-needle aspiration (FNA) biopsy. Although FNA is effective, many biopsied nodules prove benign, making the procedure unnecessary and causing patient discomfort and anxiety. To address this, the American College of Radiology developed the Thyroid Imaging Reporting and Data System (TI-RADS) to reduce biopsies of benign nodules. However, such systems are limited by interobserver variability. Recent deep learning approaches have sought to improve risk stratification, but they often fail to exploit the rich temporal and spatial context of US cine clips, which capture dynamic global information and changes in surrounding structures across various views. In this work, we propose the Spatio-Temporal Cross Attention for Cine Thyroid Ultrasound Time Series Classification (STACT-Time) model, a novel representation learning framework that integrates imaging features from US cine clips with features from segmentation masks automatically generated by a pretrained model. By leveraging self-attention and cross-attention mechanisms, our model captures the rich temporal and spatial context of US cine clips while enhancing feature representation through segmentation-guided learning. Our model improves malignancy prediction compared to state-of-the-art models, achieving a cross-validation precision of 0.91 ± 0.02 and an F1 score of 0.89 ± 0.02. By reducing unnecessary biopsies of benign nodules while maintaining high sensitivity for malignancy detection, our model has the potential to enhance clinical decision-making and improve patient outcomes.
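As a small illustration of how fold-aggregated metrics like those in the abstract are typically computed, the snippet below averages precision and F1 across cross-validation folds and reports the mean ± standard deviation. The fold count (5) and the simulated labels and predictions are assumptions for demonstration; they are not the paper's data or results.

```python
# Illustrative only: fold predictions are simulated, not the paper's data.
import numpy as np
from sklearn.metrics import precision_score, f1_score

rng = np.random.default_rng(0)
precisions, f1s = [], []
for fold in range(5):                      # assume 5-fold cross-validation
    y_true = rng.integers(0, 2, size=200)  # 0 = benign, 1 = malignant
    # Simulated classifier: correct ~90% of the time, flipped otherwise.
    y_pred = np.where(rng.random(200) < 0.9, y_true, 1 - y_true)
    precisions.append(precision_score(y_true, y_pred))
    f1s.append(f1_score(y_true, y_pred))

# Report mean ± standard deviation across folds, as in the abstract.
print(f"precision: {np.mean(precisions):.2f} ± {np.std(precisions):.2f}")
print(f"F1:        {np.mean(f1s):.2f} ± {np.std(f1s):.2f}")
```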