🤖 AI Summary
Existing pathology foundation models (PFMs) are predominantly pre-trained on hematoxylin and eosin (H&E)-stained whole-slide images (WSIs) and exhibit limited generalization to specialized stains such as immunohistochemistry (IHC). To address this gap, we propose a self-supervised vision transformer (ViT) foundation model explicitly designed for specialized staining. Our method introduces a novel self-distillation-based self-supervised learning (SSL) strategy and conducts large-scale patch-level pre-training on over 1.4 million specialized-stain WSI patches, establishing the first multi-stain foundation model. The model supports diverse downstream tasks, including classification, retrieval, and few-shot learning. Experiments demonstrate significant improvements over H&E-pretrained baselines on slide-level hepatocellular carcinoma classification and two public region-of-interest (ROI)-level benchmarks. Moreover, it achieves state-of-the-art performance in few-shot recognition and cross-stain retrieval. The code and pre-trained models are publicly released.
📝 Abstract
Foundation models trained with self-supervised learning (SSL) on large-scale histological images have significantly accelerated the development of computational pathology. These models can serve as backbones for region-of-interest (ROI) image analysis or as patch-level feature extractors for whole-slide image (WSI) analysis based on multiple instance learning (MIL). Existing pathology foundation models (PFMs) are typically pre-trained on hematoxylin-eosin (H&E)-stained pathology images. However, images with special stains, such as immunohistochemistry, are also frequently used in clinical practice, so PFMs pre-trained mainly on H&E-stained images may be limited in clinical applications involving special stains. To address this issue, we propose StainNet, a foundation model for special stains based on the vision transformer (ViT) architecture. StainNet adopts a self-distillation SSL approach and is trained on over 1.4 million patch images cropped from 20,231 publicly available special-staining WSIs in the HISTAI database. To evaluate StainNet, we conduct experiments on an in-house slide-level liver malignancy classification task and two public ROI-level datasets to demonstrate its effectiveness. We also perform few-shot learning and retrieval evaluations, and compare StainNet with recent, larger PFMs to further highlight its strengths. We have released the StainNet model weights at: https://huggingface.co/JWonderLand/StainNet.
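The self-distillation SSL the abstract refers to follows the general teacher-student pattern popularized by DINO-style methods: a student network matches the softened predictions of an exponential-moving-average (EMA) teacher across two augmented views of the same patch. The NumPy sketch below illustrates only that training signal under simplifying assumptions; the linear "encoders", noise augmentations, temperatures, and EMA momentum are illustrative stand-ins, not StainNet's actual architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, temp):
    # Temperature-scaled softmax over the last axis.
    z = (x - x.max(axis=-1, keepdims=True)) / temp
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy "networks": one linear projection each. In the real model these would be
# ViT encoders; a single matrix is enough to show the self-distillation loss.
dim_in, dim_out = 16, 8
student_w = rng.normal(size=(dim_in, dim_out))
teacher_w = student_w.copy()  # teacher starts as a copy of the student

def self_distillation_step(batch, student_w, teacher_w,
                           lr=0.1, momentum=0.99,
                           t_student=0.1, t_teacher=0.04):
    # Two augmented "views" of the same patches (here: additive noise).
    view_a = batch + 0.1 * rng.normal(size=batch.shape)
    view_b = batch + 0.1 * rng.normal(size=batch.shape)

    s = softmax(view_a @ student_w, t_student)   # student sees view A
    t = softmax(view_b @ teacher_w, t_teacher)   # teacher sees view B (no grad)

    # Cross-entropy H(t, s): sharper teacher targets supervise the student.
    loss = -(t * np.log(s + 1e-12)).sum(axis=-1).mean()

    # Gradient of the mean cross-entropy w.r.t. the student logits is
    # (s - t) / (temperature * batch_size); backprop through the linear map.
    grad_logits = (s - t) / (t_student * batch.shape[0])
    student_w = student_w - lr * (view_a.T @ grad_logits)

    # Teacher weights track the student as an exponential moving average.
    teacher_w = momentum * teacher_w + (1 - momentum) * student_w
    return loss, student_w, teacher_w

batch = rng.normal(size=(32, dim_in))  # a mini-batch of toy patch features
loss0, student_w, teacher_w = self_distillation_step(batch, student_w, teacher_w)
```

Because only the student receives gradients while the teacher is an EMA, the two networks stay close but never identical, which (together with the lower teacher temperature) is what keeps this label-free objective from collapsing to a trivial constant output.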