🤖 AI Summary
This study addresses the persistent reliance on handcrafted features and shallow models for pixel- and object-level classification in microscopic imaging, which leaves the potential of deep learning untapped. It presents the first systematic evaluation of multiple vision foundation models—including SAM, SAM2, DINOv3, μSAM, and PathoSAM—combined with shallow classifiers and attention probes for semantic segmentation and object classification across five diverse microscopic datasets. The results demonstrate that this approach consistently outperforms conventional handcrafted-feature methods, confirming the efficacy of vision foundation models in microscopic image analysis. Furthermore, the work establishes the first benchmark in this domain and outlines a practical pathway toward deployment, offering a generalizable and efficient solution for advancing computational pathology and related fields.
📝 Abstract
Deep learning underlies most modern approaches and tools in computer vision, including biomedical imaging. However, for interactive semantic segmentation (often called pixel classification in this context) and interactive object-level classification (object classification), feature-based shallow learning remains widely used. This is due to the diversity of data in this domain, the lack of large pretraining datasets, and the need for computational and label efficiency. In contrast, state-of-the-art tools for many other vision tasks in microscopy, most notably cellular instance segmentation, already rely on deep learning and have recently benefited substantially from vision foundation models (VFMs), particularly SAM. Here, we investigate whether VFMs can also improve pixel and object classification compared to current approaches. To this end, we evaluate several VFMs, including general-purpose models (SAM, SAM2, DINOv3) and domain-specific ones (μSAM, PathoSAM), in combination with shallow learning and attentive probing on five diverse and challenging datasets. Our results demonstrate consistent improvements over handcrafted features and provide a clear pathway toward practical improvements. Furthermore, our study establishes a benchmark for VFMs in microscopy and informs future developments in this area.
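The recipe the abstract describes — frozen foundation-model features fed to a shallow, label-efficient classifier for per-pixel prediction — can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the fixed random patch projection stands in for a real VFM encoder (e.g. SAM or DINOv3 embeddings), the toy image and labels are synthetic, and the random forest plays the role of the interactive shallow learner.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-in for a frozen VFM encoder: a fixed random
# projection of 3x3 local patches to 16-dim per-pixel features.
# In the real setting these would be SAM/DINOv3 image embeddings.
proj = rng.normal(size=(9, 16))

def per_pixel_features(img):
    """Extract a feature vector for every pixel of a 2D image."""
    h, w = img.shape
    pad = np.pad(img, 1, mode="reflect")
    # Stack the 9 values of each pixel's 3x3 neighborhood.
    patches = np.stack(
        [pad[i:i + h, j:j + w] for i in range(3) for j in range(3)],
        axis=-1,
    )
    return patches.reshape(-1, 9) @ proj  # shape (h*w, 16)

# Toy microscopy-like image: a bright "cell" on a noisy background,
# with a per-pixel foreground/background label map.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
img += rng.normal(scale=0.1, size=img.shape)
labels = np.zeros((32, 32), dtype=int)
labels[8:24, 8:24] = 1

X = per_pixel_features(img)
y = labels.ravel()

# Shallow classifier on frozen features: the pixel-classification setup.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = clf.predict(X).reshape(32, 32)
train_acc = (pred == labels).mean()
```

In an interactive tool, `y` would come from a handful of user scribbles rather than dense labels, and only the shallow classifier is retrained on each annotation, keeping the loop fast because the encoder features are computed once.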