AI Summary
This study evaluates whether the Segment Anything Model 3 (SAM3) outperforms SAM2 on eye image segmentation tasks and presents the first investigation into the efficacy of SAM3's newly introduced text (conceptual) prompting modality. We conduct a systematic comparison of segmentation performance across diverse ocular imaging datasets, encompassing both high-resolution laboratory videos and complex in-the-wild scenes, while also comprehensively assessing the combined use of visual and textual prompts. As the first work to evaluate SAM3's text prompting capability specifically for eye image segmentation, we additionally release an open-source SAM3 adaptation supporting arbitrary-length video processing. Our experiments demonstrate that SAM3 generally fails to surpass SAM2, which consistently achieves higher accuracy and faster inference speed, establishing SAM2 as the current method of choice for eye image segmentation.
Abstract
Previous work has reported that vision foundation models show promising zero-shot performance in eye image segmentation. Here we examine whether the latest iteration of the Segment Anything Model, SAM3, offers better eye image segmentation performance than SAM2, and explore the performance of its new concept (text) prompting mode. Eye image segmentation performance was evaluated using diverse datasets encompassing both high-resolution, high-quality videos from a lab environment and the TEyeD dataset, which consists of challenging eye videos acquired in the wild. Results show that in most cases SAM3, with either visual or concept prompts, did not outperform SAM2 on either the lab or the in-the-wild datasets. Since SAM2 not only performed better but was also faster, we conclude that SAM2 remains the best option for eye image segmentation. We provide our adaptation of SAM3's codebase that allows processing videos of arbitrary duration.
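As context for the released codebase, a common way to support videos of arbitrary duration under a fixed memory budget is to segment the video in fixed-size chunks and re-prompt each chunk with the final mask of the previous one. The sketch below is a minimal illustration of that chunking idea, not the released implementation: `segment_chunk` is a hypothetical stand-in for a SAM2/SAM3-style video predictor.

```python
# Hedged sketch: chunked processing of an arbitrarily long video, carrying
# the last predicted mask across chunk boundaries as the next prompt.
# `segment_chunk` is a hypothetical placeholder, NOT the SAM2/SAM3 API.

def segment_chunk(frames, prompt_mask):
    """Stub predictor: a real model would propagate prompt_mask
    through every frame of the chunk and return per-frame masks."""
    return [prompt_mask for _ in frames]

def segment_long_video(frames, init_mask, chunk_size=100):
    """Segment a video of arbitrary length by processing fixed-size
    chunks, re-prompting each chunk with the previous chunk's last mask."""
    masks = []
    prompt = init_mask
    for start in range(0, len(frames), chunk_size):
        chunk = frames[start:start + chunk_size]
        chunk_masks = segment_chunk(chunk, prompt)
        masks.extend(chunk_masks)
        prompt = chunk_masks[-1]  # hand state across the chunk boundary
    return masks

frames = list(range(250))  # placeholder for 250 video frames
out = segment_long_video(frames, init_mask="mask0", chunk_size=100)
print(len(out))  # 250: one mask per frame regardless of video length
```

The design point is that memory usage depends only on `chunk_size`, not on total video length, which is what makes arbitrary-duration processing feasible.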