Evaluating Vision Foundation Models for Pixel and Object Classification in Microscopy

📅 2026-03-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the persistent reliance on handcrafted features and shallow models in pixel- and object-level classification for microscopic imaging, which limits the potential of deep learning. It presents the first systematic evaluation of multiple vision foundation models—including SAM, SAM2, DINOv3, μSAM, and PathoSAM—combined with shallow classifiers and attention probes for semantic segmentation and object classification across five diverse microscopic datasets. The results demonstrate that this approach significantly outperforms conventional handcrafted-feature methods, confirming the efficacy of vision foundation models in microscopic image analysis. Furthermore, the work establishes the first benchmark in this domain and outlines a practical pathway toward deployment, offering a generalizable and efficient solution for advancing computational pathology and related fields.

Technology Category

Application Category

📝 Abstract
Deep learning underlies most modern approaches and tools in computer vision, including biomedical imaging. However, for interactive semantic segmentation (often called pixel classification in this context) and interactive object-level classification (object classification), feature-based shallow learning remains widely used. This is due to the diversity of data in this domain, the lack of large pretraining datasets, and the need for computational and label efficiency. In contrast, state-of-the-art tools for many other vision tasks in microscopy - most notably cellular instance segmentation - already rely on deep learning and have recently benefited substantially from vision foundation models (VFMs), particularly SAM. Here, we investigate whether VFMs can also improve pixel and object classification compared to current approaches. To this end, we evaluate several VFMs, including general-purpose models (SAM, SAM2, DINOv3) and domain-specific ones ($μ$SAM, PathoSAM), in combination with shallow learning and attentive probing on five diverse and challenging datasets. Our results demonstrate consistent improvements over hand-crafted features and provide a clear pathway toward practical improvements. Furthermore, our study establishes a benchmark for VFMs in microscopy and informs future developments in this area.
Problem

Research questions and friction points this paper is trying to address.

Vision Foundation Models
Pixel Classification
Object Classification
Microscopy
Semantic Segmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision Foundation Models
Microscopy
Pixel Classification
Object Classification
Benchmark
🔎 Similar Papers
No similar papers found.
C
Carolin Teuber
Georg-August-University Göttingen, Institute of Computer Science
Anwai Archit
Anwai Archit
PhD Candidate, University of Göttingen
Biomedical Image AnalysisMachine LearningComputer Vision
T
Tobias Boothe
Department of Tissue Dynamics and Regeneration, Max Planck Institute for Multidisciplinary Sciences, Göttingen
P
Peter Ditte
Department of Tissue Dynamics and Regeneration, Max Planck Institute for Multidisciplinary Sciences, Göttingen
J
Jochen Rink
Department of Tissue Dynamics and Regeneration, Max Planck Institute for Multidisciplinary Sciences, Göttingen; Georg-August-University Göttingen, Faculty of Biology and Psychology
Constantin Pape
Constantin Pape
Juniorprofessor, University Goettingen
ConnectomicsMachine LearningBio Image AnalysisComputer VisionComputational Biology