🤖 AI Summary
Addressing challenges in organ and lesion segmentation and tracking in long ultrasound videos—including low contrast, strong noise, ambiguous boundaries, and small-target disappearance—this paper proposes a memory-augmented wavelet convolutional network. Methodologically, it introduces a novel cascaded wavelet compression mechanism coupled with a cross-frame cross-attention memory bank, integrated with a high-frequency-aware adaptive fusion module to enable multi-scale frequency-domain feature modeling and efficient temporal information utilization. Evaluated on four ultrasound video datasets, the method significantly outperforms state-of-the-art approaches: Dice score for small thyroid nodules improves by 23.4%, and boundary localization error decreases by 18.7%. This work pioneers the deep integration of wavelet analysis, memory mechanisms, and high-frequency adaptive filtering, establishing a new paradigm for high-precision segmentation and tracking in medical ultrasound video analysis.
📝 Abstract
Medical ultrasound videos are widely used for medical inspections, disease diagnosis and surgical planning. High-fidelity lesion area and target organ segmentation constitutes a key component of the computer-assisted surgery workflow. The low contrast levels and noisy backgrounds of ultrasound videos cause missegmentation of organ boundary, which may lead to small object losses and increase boundary segmentation errors. Object tracking in long videos also remains a significant research challenge. To overcome these challenges, we propose a memory bank-based wavelet filtering and fusion network, which adopts an encoder-decoder structure to effectively extract fine-grained detailed spatial features and integrate high-frequency (HF) information. Specifically, memory-based wavelet convolution is presented to simultaneously capture category, detailed information and utilize adjacent information in the encoder. Cascaded wavelet compression is used to fuse multiscale frequency-domain features and expand the receptive field within each convolutional layer. A long short-term memory bank using cross-attention and memory compression mechanisms is designed to track objects in long video. To fully utilize the boundary-sensitive HF details of feature maps, an HF-aware feature fusion module is designed via adaptive wavelet filters in the decoder. In extensive benchmark tests conducted on four ultrasound video datasets (two thyroid nodule, the thyroid gland, the heart datasets) compared with the state-of-the-art methods, our method demonstrates marked improvements in segmentation metrics. In particular, our method can more accurately segment small thyroid nodules, demonstrating its effectiveness for cases involving small ultrasound objects in long video. The code is available at https://github.com/XiAooZ/MWNet.