AI Summary
This work addresses the challenge of few-shot, multi-object segmentation in X-ray angiography videos by proposing a novel few-shot video object segmentation method. The approach introduces a direction-aware local matching strategy to constrain the search space and integrates supervised spatio-temporal contrastive learning to enhance inter-frame feature consistency. Additionally, it employs a non-parametric dynamic local sampling mechanism that avoids reliance on CUDA-specific operators or trainable parameters. The study contributes the first publicly available benchmark dataset for multi-object segmentation in X-ray angiography videos, termed MOSXAV, and demonstrates significant performance gains over existing methods on CADICA, XACV, and MOSXAV. The proposed method achieves superior segmentation accuracy and generalization capability on both seen and unseen object categories.
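The supervised contrastive objective mentioned above can be illustrated with a generic SupCon-style loss over per-object features pooled from different frames. This is a minimal sketch under assumptions of our own: the function name, the use of L2-normalized pooled features, and the temperature value are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def supervised_contrastive_loss(feats, labels, tau=0.1):
    """SupCon-style loss: pull same-label features (e.g. the same object
    seen in different frames) together, push different labels apart.

    feats:  (N, D) L2-normalized feature vectors pooled per object per frame
    labels: (N,) object ids; equal ids across frames count as positives
    Returns a per-anchor loss array of shape (N,).
    """
    sim = feats @ feats.T / tau                       # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    # row-wise log-softmax over all other samples
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # positives: same object id, excluding the anchor itself
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(len(labels), dtype=bool)
    # mean log-likelihood of positives per anchor, negated (0 if no positives)
    return -np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)

# Usage: 4 pooled features, two objects tracked across two frames
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
labels = np.array([0, 0, 1, 1])
loss = supervised_contrastive_loss(feats, labels)
```

Minimizing this loss encourages an object's features to stay consistent across frames, which is the inter-frame coherence the summary refers to.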
Abstract
We introduce a novel few-shot video object segmentation (FSVOS) model that employs a local matching strategy to restrict the search space to the most relevant neighboring pixels. Rather than relying on inefficient standard im2col-like implementations (e.g., spatial convolutions, depthwise convolutions, and feature-shifting mechanisms) or hardware-specific CUDA kernels (e.g., deformable and neighborhood attention), which often suffer from limited portability across non-CUDA devices, we reorganize the local sampling process through a direction-based sampling perspective. Specifically, we implement a non-parametric sampling mechanism that enables dynamically varying sampling regions. This approach provides the flexibility to adapt to diverse spatial structures without the computational cost of parametric layers or the need for model retraining. To further enhance feature coherence across frames, we design a supervised spatio-temporal contrastive learning scheme that enforces consistency in feature representations. In addition, we introduce a publicly available benchmark dataset for multi-object segmentation in X-ray angiography videos (MOSXAV), featuring detailed, manually labeled segmentation ground truth. Extensive experiments on the CADICA, XACV, and MOSXAV datasets show that our proposed FSVOS method outperforms current state-of-the-art video segmentation methods in terms of segmentation accuracy and generalization capability (i.e., on both seen and unseen categories). This work offers enhanced flexibility and potential for a wide range of clinical applications.
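The direction-based, non-parametric sampling idea can be sketched with plain array shifts: for each pixel, gather features at a few steps along each direction, with no convolution kernels, no learned offsets, and no CUDA-specific gather ops. The function name, the fixed 8-neighborhood direction set, and the step count are hypothetical illustration choices, not the paper's exact mechanism (which varies the sampling region dynamically).

```python
import numpy as np

def directional_local_sampling(feat, directions, steps=2):
    """Gather neighbor features along each direction via array shifts.

    feat:       (H, W, C) feature map
    directions: list of (dy, dx) unit displacements
    steps:      how far to look along each direction, in pixels

    Returns (H, W, K, C) with K = len(directions) * steps, built from
    plain array ops only -- no parametric layers, no custom kernels.
    """
    samples = []
    for dy, dx in directions:
        for s in range(1, steps + 1):
            # np.roll shifts the map so position (y, x) now holds the
            # feature originally at (y + s*dy, x + s*dx) (with wrap-around)
            samples.append(np.roll(feat, shift=(-s * dy, -s * dx), axis=(0, 1)))
    return np.stack(samples, axis=2)

# Usage: 8-neighborhood directions; local matching is then restricted
# to these sampled neighbors instead of the full frame
feat = np.random.rand(16, 16, 4).astype(np.float32)
dirs = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]
neighbors = directional_local_sampling(feat, dirs, steps=2)
```

Because the direction set and step count are plain Python inputs rather than learned parameters, the sampling region can be changed per input without retraining, which is the flexibility the abstract claims for the non-parametric design.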