🤖 AI Summary
Current pretraining of digital pathology vision foundation models relies on expert-driven, whole-slide image (WSI)-level data curation, overlooking fine-grained tissue heterogeneity and incurring high annotation costs.
Method: We propose a tile-level, unsupervised, automatic data curation method. At its core is the first large-scale hierarchical-clustering-based sampling framework operating in tile embedding space, which exposes an intrinsic trade-off between data scale and representation balance. Building on this, we design embedding-space balanced sampling and batch-aware optimization strategies tailored to foundation model pretraining.
Contribution/Results: Our method achieves significant performance gains across multiple clinically relevant downstream tasks—including tumor classification, subtype prediction, and survival prognosis—demonstrating substantial improvement in pathological representation quality. It establishes a new data-efficient pretraining paradigm for pathology vision foundation models, reducing reliance on manual curation while enhancing generalization and robustness.
📝 Abstract
Vision foundation models (FMs) are accelerating the development of digital pathology algorithms and transforming biomedical research. These models learn, in a self-supervised manner, to represent histological features in highly heterogeneous tiles extracted from whole-slide images (WSIs) of real-world patient samples. The performance of these FMs is significantly influenced by the size, diversity, and balance of the pretraining data. However, data selection has been primarily guided by expert knowledge at the WSI level, focusing on factors such as disease classification and tissue types, while largely overlooking the granular details available at the tile level. In this paper, we investigate the potential of unsupervised automatic data curation at the tile level, operating on a pool of 350 million tiles. Specifically, we apply hierarchical clustering trees to pre-extracted tile embeddings, allowing us to sample balanced datasets uniformly across the embedding space of the pretrained FM. We further show that these datasets are subject to a trade-off between size and balance, potentially compromising the quality of representations learned by FMs, and propose tailored batch sampling strategies to mitigate this effect. We demonstrate the effectiveness of our method through improved performance on a diverse range of clinically relevant downstream tasks.
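The abstract's curation idea can be illustrated with a minimal sketch: recursively cluster tile embeddings into a k-ary tree, then draw the same number of tiles from every leaf so that over-represented tissue modes are down-weighted. This is an assumption-laden toy (plain k-means on synthetic 8-d embeddings, a fixed tree depth, NumPy only); the paper's actual clustering method, tree construction, and scale are not specified here.

```python
import numpy as np

def kmeans(X, k, iters=20, rng=None):
    """Plain Lloyd's k-means; stands in for the clustering used at each tree level."""
    if rng is None:
        rng = np.random.default_rng(0)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # squared Euclidean distance of every point to every center
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(0)
    return labels

def cluster_tree_leaves(X, idx, k, depth, rng):
    """Recursively split embeddings into a k-ary tree; return the leaf index sets."""
    if depth == 0 or len(idx) <= k:
        return [idx]
    labels = kmeans(X[idx], k, rng=rng)
    leaves = []
    for j in range(k):
        child = idx[labels == j]
        if len(child):
            leaves += cluster_tree_leaves(X, child, k, depth - 1, rng)
    return leaves

def balanced_sample(leaves, per_leaf, rng):
    """Draw (up to) the same number of tiles from every leaf, flattening density."""
    picks = []
    for leaf in leaves:
        take = min(per_leaf, len(leaf))
        picks.append(rng.choice(leaf, size=take, replace=False))
    return np.sort(np.concatenate(picks))

rng = np.random.default_rng(0)
# toy "tile embeddings": one dominant tissue mode and two rare ones
X = np.concatenate([
    rng.normal(0.0, 1.0, size=(900, 8)),
    rng.normal(6.0, 1.0, size=(60, 8)),
    rng.normal(-6.0, 1.0, size=(40, 8)),
])
leaves = cluster_tree_leaves(X, np.arange(len(X)), k=3, depth=2, rng=rng)
sample = balanced_sample(leaves, per_leaf=25, rng=rng)
```

Capping each leaf at `per_leaf` tiles is what makes the curated set balanced but also smaller than the raw pool, the size-versus-balance trade-off the abstract identifies and mitigates with batch sampling strategies during pretraining.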