Spatiotemporal Learning with Context-aware Video Tubelets for Ultrasound Video Analysis

📅 2025-03-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the loss of global spatial context caused by local ROI cropping in conventional tubelet-based pathology detection, this paper proposes a lightweight context-aware video tubelet learning framework. The method explicitly encodes tubelet position, size, and detection confidence as spatial contextual inputs to the classifier, and introduces an ROI-aligned feature transfer mechanism from a pre-trained detector to enlarge the receptive field while preserving global spatial consistency. Combining tubelet-level spatiotemporal modeling with this context-embedded classifier yields an efficient network of only 0.4M parameters. Evaluated under five-fold cross-validation on a large-scale ultrasound video dataset of 14,804 clips from 828 patients, the framework achieves statistically significant improvements over existing tubelet-based approaches, and its computational efficiency makes it suitable for real-time clinical deployment.

📝 Abstract
Computer-aided pathology detection algorithms for video-based imaging modalities must accurately interpret complex spatiotemporal information by integrating findings across multiple frames. Current state-of-the-art methods operate by classifying on video sub-volumes (tubelets), but they often lose global spatial context by focusing only on local regions within detection ROIs. Here we propose a lightweight framework for tubelet-based object detection and video classification that preserves both global spatial context and fine spatiotemporal features. To address the loss of global context, we embed tubelet location, size, and confidence as inputs to the classifier. Additionally, we use ROI-aligned feature maps from a pre-trained detection model, leveraging learned feature representations to increase the receptive field and reduce computational complexity. Our method is efficient, with the spatiotemporal tubelet classifier comprising only 0.4M parameters. We apply our approach to detect and classify lung consolidation and pleural effusion in ultrasound videos. Five-fold cross-validation on 14,804 videos from 828 patients shows our method outperforms previous tubelet-based approaches and is suited for real-time workflows.
Problem

Research questions and friction points this paper is trying to address.

Interpreting spatiotemporal data in ultrasound videos accurately
Preserving global spatial context in tubelet-based video analysis
Detecting lung consolidation and pleural effusion efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight framework preserves global spatial context
Embeds tubelet location, size, and confidence
Uses ROI-aligned feature maps from pre-trained model
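The core idea behind these innovations can be illustrated with a minimal numpy sketch. This is not the paper's implementation: all shapes, variable names, and the single linear layer standing in for the 0.4M-parameter spatiotemporal classifier are assumptions for illustration. Per-frame ROI-aligned features from a frozen detector are concatenated with a normalized context vector (box center, size, confidence), pooled over time, and classified.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: T frames per tubelet, each with a C-dim
# ROI-aligned feature vector from a frozen, pre-trained detector.
T, C = 16, 64
roi_feats = rng.standard_normal((T, C))

# Spatial context per frame: box center (cx, cy), size (w, h), and
# detection confidence -- the global cues the paper embeds,
# here assumed normalized to [0, 1].
context = rng.uniform(size=(T, 5))

# Concatenate per-frame local features with their spatial context,
# then mean-pool over time to get one tubelet descriptor.
x = np.concatenate([roi_feats, context], axis=1)   # (T, C + 5)
tubelet_desc = x.mean(axis=0)                      # (C + 5,)

# A single linear layer + softmax stands in for the lightweight
# context-embedded classifier (2 classes: e.g. consolidation vs. not).
W = rng.standard_normal((2, C + 5)) * 0.01
logits = W @ tubelet_desc
probs = np.exp(logits) / np.exp(logits).sum()
print(probs.shape)
```

The point of the sketch is the concatenation step: because box geometry and confidence ride along with the cropped ROI features, the classifier retains global spatial context that plain tubelet cropping discards.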