AI Summary
Existing foundation models in computational pathology struggle to balance performance, robustness, and computational efficiency, limiting their clinical utility. To address this challenge, this work introduces the Atlas 2 series of pathology vision foundation models, which undergo ultra-large-scale multicenter pretraining on 5.5 million whole-slide images from Charité, LMU Munich, and Mayo Clinic, the largest such dataset to date. Built upon a Vision Transformer architecture, Atlas 2 is co-optimized for accuracy, robustness across diverse clinical settings, and deployment efficiency. The models achieve state-of-the-art performance across 80 public benchmarks, significantly outperforming current methods and marking a substantial advance in dimensions critical for real-world clinical applicability.
Abstract
Pathology foundation models have substantially advanced the possibilities in computational pathology, yet trade-offs in performance, robustness, and computational requirements have remained, limiting their clinical deployment. In this report, we present Atlas 2, Atlas 2-B, and Atlas 2-S, three pathology vision foundation models that bridge these shortcomings by achieving state-of-the-art prediction performance, robustness, and resource efficiency in a comprehensive evaluation across eighty public benchmarks. Our models were trained on the largest pathology foundation model dataset to date, comprising 5.5 million histopathology whole-slide images collected from three medical institutions: Charité - Universitätsmedizin Berlin, LMU Munich, and Mayo Clinic.