🤖 AI Summary
Weakly supervised segmentation of small hyper-reflective foci (HRFs) in optical coherence tomography (OCT) images remains challenging: existing methods rely on heavy downsampling and coarse localization, leading to missed detections, inaccurate localization, and loss of fine detail. To address this, we propose: (1) an LRP-guided prompting mechanism for SAM 2 that improves spatial localization precision; (2) a Compact Convolutional Transformer (CCT) architecture, replacing the conventional multiple-instance learning (MIL) framework, that adds a positional encoding and strengthens long-range feature interactions; and (3) an iterative weakly supervised inference procedure that increases recall. Trained solely with point-level annotations, our method substantially improves both segmentation accuracy and recall for HRFs, achieving high-resolution, fine-grained localization and segmentation at low annotation cost and with strong generalizability across diverse OCT datasets.
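To make the LRP-guided prompting idea concrete, here is a minimal sketch (not the authors' code) of converting an LRP relevance map into point prompts for a SAM-2-style promptable segmenter. The peak-picking thresholds, the helper name, and the `predictor` interface shown in the usage comment are illustrative assumptions; the paper describes the actual mechanism.

```python
# Sketch: turn an LRP relevance map into point prompts for a promptable segmenter.
# All thresholds and the predictor interface below are assumptions for illustration.
import numpy as np
from skimage.feature import peak_local_max

def relevance_to_point_prompts(relevance_map: np.ndarray,
                               min_distance: int = 5,
                               threshold_rel: float = 0.3) -> np.ndarray:
    """Pick local maxima of an LRP relevance map as candidate HRF locations.

    Returns an (N, 2) array of (x, y) point prompts.
    """
    # peak_local_max returns (row, col) coordinates of relevance peaks
    peaks = peak_local_max(relevance_map,
                           min_distance=min_distance,
                           threshold_rel=threshold_rel)
    # SAM-style predictors expect (x, y), i.e. (col, row)
    return peaks[:, ::-1].astype(np.float32)

# Hypothetical usage with a SAM-2-like predictor (interface assumed to mirror
# the original segment-anything `SamPredictor`):
#   predictor.set_image(oct_bscan_rgb)
#   points = relevance_to_point_prompts(lrp_map)
#   masks, scores, _ = predictor.predict(point_coords=points,
#                                        point_labels=np.ones(len(points)))
```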
📝 Abstract
Weakly supervised segmentation has the potential to greatly reduce the annotation effort required to train segmentation models for small structures such as hyper-reflective foci (HRF) in optical coherence tomography (OCT). However, most weakly supervised methods either involve strong downsampling of the input images or only achieve localization at a coarse resolution, both of which are unsatisfactory for small structures. We propose a novel framework that increases the spatial resolution of a traditional attention-based Multiple Instance Learning (MIL) approach by using Layer-wise Relevance Propagation (LRP) to prompt the Segment Anything Model (SAM 2), and increases recall with iterative inference. Moreover, we demonstrate that replacing MIL with a Compact Convolutional Transformer (CCT), which adds a positional encoding and permits an exchange of information between different regions of the OCT image, leads to a further, substantial increase in segmentation accuracy.
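As a rough illustration of the two CCT ingredients the abstract highlights, a learnable positional encoding and attention over patch tokens so that different image regions exchange information, here is a minimal PyTorch sketch of a CCT-style classifier. The convolutional tokenizer, layer sizes, and pooling head are assumptions made for illustration, not the authors' configuration.

```python
# Minimal CCT-style classifier: conv tokenizer + positional embedding +
# transformer encoder + attention-based sequence pooling (no class token).
# All hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class MiniCCT(nn.Module):
    def __init__(self, in_ch=1, dim=128, depth=4, heads=4, n_tokens=196, n_classes=2):
        super().__init__()
        # Convolutional tokenizer: conv + pooling produce a grid of patch tokens
        self.tokenizer = nn.Sequential(
            nn.Conv2d(in_ch, dim, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
        )
        # Learnable positional embedding added to the token sequence
        self.pos_emb = nn.Parameter(torch.zeros(1, n_tokens, dim))
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        # Sequence pooling: soft attention over tokens replaces a class token
        self.attn_pool = nn.Linear(dim, 1)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):
        tokens = self.tokenizer(x).flatten(2).transpose(1, 2)   # (B, N, dim)
        tokens = tokens + self.pos_emb[:, :tokens.size(1)]
        tokens = self.encoder(tokens)                            # regions interact here
        w = torch.softmax(self.attn_pool(tokens), dim=1)         # (B, N, 1)
        pooled = (w * tokens).sum(dim=1)                         # (B, dim)
        return self.head(pooled)

# Example: a batch of two 28x28 single-channel patches -> 14x14 = 196 tokens
logits = MiniCCT()(torch.randn(2, 1, 28, 28))
```

Unlike attention-based MIL, which scores each instance independently before pooling, the self-attention layers here let every token attend to every other token before pooling, which is the information exchange between image regions that the abstract refers to.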