STACT-Time: Spatio-Temporal Cross Attention for Cine Thyroid Ultrasound Time Series Classification

📅 2025-06-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address insufficient spatiotemporal modeling of dynamic ultrasound (US) cine clips in thyroid nodule malignancy classification—leading to unnecessary benign nodule biopsies—this paper proposes STACT-Time, the first model integrating self-attention and cross-attention mechanisms to jointly encode temporal US features and nodule mask–guided features derived from a pretrained segmentation model. This multimodal spatiotemporal representation learning framework enhances collaborative modeling of local nodule structure and motion patterns. Evaluated via cross-validation, STACT-Time achieves a precision of 0.91 and an F1 score of 0.89, outperforming existing methods. It maintains high sensitivity while significantly improving specificity, thereby reducing the rate of unnecessary fine-needle aspiration (FNA) procedures.

📝 Abstract
Thyroid cancer is among the most common cancers in the United States. Thyroid nodules are frequently detected through ultrasound (US) imaging, and some require further evaluation via fine-needle aspiration (FNA) biopsy. Despite its effectiveness, FNA often leads to unnecessary biopsies of benign nodules, causing patient discomfort and anxiety. To address this, the American College of Radiology Thyroid Imaging Reporting and Data System (TI-RADS) has been developed to reduce benign biopsies. However, such systems are limited by interobserver variability. Recent deep learning approaches have sought to improve risk stratification, but they often fail to utilize the rich temporal and spatial context provided by US cine clips, which contain dynamic global information and surrounding structural changes across various views. In this work, we propose the Spatio-Temporal Cross Attention for Cine Thyroid Ultrasound Time Series Classification (STACT-Time) model, a novel representation learning framework that integrates imaging features from US cine clips with features from segmentation masks automatically generated by a pretrained model. By leveraging self-attention and cross-attention mechanisms, our model captures the rich temporal and spatial context of US cine clips while enhancing feature representation through segmentation-guided learning. Our model improves malignancy prediction compared to state-of-the-art models, achieving a cross-validation precision of 0.91 (±0.02) and an F1 score of 0.89 (±0.02). By reducing unnecessary biopsies of benign nodules while maintaining high sensitivity for malignancy detection, our model has the potential to enhance clinical decision-making and improve patient outcomes.
Problem

Research questions and friction points this paper is trying to address.

Reducing unnecessary biopsies of benign thyroid nodules
Improving risk stratification with spatio-temporal US cine clips
Enhancing malignancy prediction accuracy in thyroid ultrasound
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spatio-temporal cross attention for US cine clips
Segmentation-guided learning with pretrained models
Self-attention for dynamic global information capture
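The self-attention and cross-attention fusion named in the points above can be sketched roughly as follows: temporal self-attention over per-frame cine embeddings, followed by cross-attention in which the imaging features query the segmentation-mask–guided features. This is a minimal illustrative sketch, not the paper's actual architecture — the embedding dimension, head count, pooling, and classifier head are all assumptions.

```python
# Illustrative sketch (not the published STACT-Time configuration):
# self-attention over cine frame embeddings, then cross-attention
# against mask-guided embeddings, mean-pooled for classification.
import torch
import torch.nn as nn


class SpatioTemporalCrossAttention(nn.Module):
    def __init__(self, dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.classifier = nn.Linear(dim, 2)  # benign vs. malignant logits

    def forward(self, frame_emb: torch.Tensor, mask_emb: torch.Tensor) -> torch.Tensor:
        # frame_emb, mask_emb: (batch, num_frames, dim) per-frame embeddings
        # Temporal self-attention over the cine clip's frame embeddings.
        x, _ = self.self_attn(frame_emb, frame_emb, frame_emb)
        x = self.norm1(x + frame_emb)
        # Cross-attention: imaging features query the mask-guided features.
        fused, _ = self.cross_attn(x, mask_emb, mask_emb)
        fused = self.norm2(fused + x)
        # Pool over the time axis and classify the clip.
        return self.classifier(fused.mean(dim=1))


frames = torch.randn(2, 16, 128)  # 2 clips, 16 frames, 128-dim embeddings
masks = torch.randn(2, 16, 128)   # matching mask-guided embeddings
logits = SpatioTemporalCrossAttention()(frames, masks)
print(logits.shape)  # torch.Size([2, 2])
```

In practice the two embedding streams would come from a frame encoder and a pretrained segmentation model respectively; here both are random tensors purely to show the tensor flow.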
Irsyad Adam
Medical Informatics PhD, UCLA
Knowledge Graphs, GNNs, Multi-Omics Integration, Multi-Modal Fusion Models, Model Explainability

Tengyue Zhang
Medical & Imaging Informatics, Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles (UCLA), CA, USA; Department of Bioengineering, UCLA, Los Angeles, CA, USA

Shrayes Raman
Department of Bioengineering, UCLA, Los Angeles, CA, USA

Zhuyu Qiu
Medical & Imaging Informatics, Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles (UCLA), CA, USA

Brandon Taraku
Department of Bioengineering, UCLA, Los Angeles, CA, USA

Hexiang Feng
Department of Bioengineering, UCLA, Los Angeles, CA, USA

Sile Wang
Department of Bioengineering, UCLA, Los Angeles, CA, USA

Ashwath Radhachandran
PhD Student, UCLA
medical informatics, making machines learn, doing science

Shreeram Athreya
PhD Student, UCLA
Biomedical image processing, Computer vision, Computational Imaging, Medical Devices

Vedrana Ivezic
PhD student, UCLA

Peipei Ping
Professor of Physiology, UCLA
cardiovascular medicine, proteomics, data science

Corey Arnold
Medical & Imaging Informatics, Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles (UCLA), CA, USA; Department of Bioengineering, UCLA, Los Angeles, CA, USA; Department of Electrical and Computer Engineering, UCLA, Los Angeles, CA, USA; Department of Radiological Sciences, UCLA, Los Angeles, CA, USA

William Speier
UCLA
Machine learning, brain-computer interfaces, medical image analysis