🤖 AI Summary
Detecting and classifying atypical mitotic figures (AMFs) in histopathological images remains challenging due to their low prevalence, subtle morphological features, and inter-observer variability in annotation. Method: We propose a lightweight, efficient transfer learning framework based on DINOv3-H+ Vision Transformer. It leverages natural-image pretraining from DINOv3 for cross-domain knowledge transfer, employs Low-Rank Adaptation (LoRA) for parameter-efficient fine-tuning, and integrates strong data augmentation to mitigate small-sample bias. Contribution/Results: Evaluated on the MIDOG 2025 preliminary screening test set, our method achieves a balanced accuracy of 0.8871—significantly outperforming baseline models. This work represents the first empirical validation of DINOv3’s strong generalization capability for digital pathology AMF recognition. Moreover, it establishes a reproducible, lightweight adaptation paradigm for pretrained foundation models in computational pathology.
📝 Abstract
Atypical mitotic figures (AMFs) are markers of abnormal cell division associated with poor prognosis, yet their detection remains difficult due to low prevalence, subtle morphology, and inter-observer variability. The MIDOG 2025 challenge introduces a benchmark for AMF classification across multiple domains. In this work, we evaluate the recently published DINOv3-H+ vision transformer, pretrained on natural images, which we fine-tuned using low-rank adaptation (LoRA, 650k trainable parameters) and extensive augmentation. Despite the domain gap, DINOv3 transfers effectively to histopathology, achieving a balanced accuracy of 0.8871 on the preliminary test set. These results highlight the robustness of DINOv3 pretraining and show that, when combined with parameter-efficient fine-tuning, it provides a strong baseline for atypical mitosis classification in MIDOG 2025.