Mind the Gap: Continuous Magnification Sampling for Pathology Foundation Models

📅 2026-01-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the unclear impact of magnification-level variation and training sampling strategies on the performance of existing histopathology foundation models. It reveals that discrete uniform sampling leads to degraded performance at intermediate magnifications—a previously unreported deficiency. Framing magnification sampling as a multi-source domain adaptation problem, the work proposes a continuous magnification sampling strategy to eliminate coverage gaps across scales and derives a theoretically optimal sampling distribution. The authors introduce two new benchmark datasets (TCGA-MS and BRACS-MS) and cross-scale evaluation metrics to rigorously assess model robustness. Experimental results demonstrate that the proposed approach improves balanced accuracy at intermediate magnifications by up to 4 percentage points, establishing magnification level as a critical factor influencing model robustness in computational pathology.
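As a concrete illustration of the difference between the two strategies, the sketch below contrasts discrete uniform sampling at the four standard mpp levels with continuous sampling over the same range. The function names and the log-uniform density are assumptions made for illustration only; the paper derives its own optimized sampling distribution, which this sketch does not reproduce.

```python
import numpy as np

# Hypothetical illustration of the two magnification sampling strategies.
# The log-uniform density below is an assumed choice, not the paper's derived distribution.

DISCRETE_MPP = [0.25, 0.5, 1.0, 2.0]  # standard discrete levels (microns per pixel)

def sample_mpp_discrete(rng: np.random.Generator) -> float:
    """Discrete uniform sampling: each training patch uses one of four fixed scales."""
    return float(rng.choice(DISCRETE_MPP))

def sample_mpp_continuous(rng: np.random.Generator,
                          low: float = 0.25, high: float = 2.0) -> float:
    """Continuous sampling over the full magnification range.
    A log-uniform density is one natural choice because magnification levels
    are spaced multiplicatively; intermediate scales (e.g., 0.75 mpp) are now covered."""
    return float(np.exp(rng.uniform(np.log(low), np.log(high))))

rng = np.random.default_rng(0)
print([round(sample_mpp_discrete(rng), 2) for _ in range(5)])
print([round(sample_mpp_continuous(rng), 2) for _ in range(5)])
```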

📝 Abstract
In histopathology, pathologists examine both tissue architecture at low magnification and fine-grained morphology at high magnification. Yet, the performance of pathology foundation models across magnifications and the effect of magnification sampling during training remain poorly understood. We model magnification sampling as a multi-source domain adaptation problem and develop a simple theoretical framework that reveals systematic trade-offs between sampling strategies. We show that the widely used discrete uniform sampling of magnifications (0.25, 0.5, 1.0, 2.0 mpp) leads to degradation at intermediate magnifications. We introduce continuous magnification sampling, which removes gaps in magnification coverage while preserving performance at standard scales. Further, we derive sampling distributions that optimize representation quality across magnification scales. To evaluate these strategies, we introduce two new benchmarks (TCGA-MS, BRACS-MS) with appropriate metrics. Our experiments show that continuous sampling substantially improves over discrete sampling at intermediate magnifications, with gains of up to 4 percentage points in balanced classification accuracy, and that optimized distributions can further improve performance. Finally, we evaluate current histopathology foundation models, finding that magnification is a primary driver of performance variation across models. Our work paves the way towards future pathology foundation models that perform reliably across magnifications.
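The cross-scale evaluation described in the abstract can be pictured as probing a frozen encoder at a grid of magnifications and reporting balanced accuracy per scale. The sketch below is a hypothetical protocol, not the paper's benchmark code: encoder, probe, and patches_by_mpp are assumed names, and the magnification grid is illustrative. It only shows how degradation at intermediate magnifications would surface in such a metric.

```python
from sklearn.metrics import balanced_accuracy_score

def evaluate_across_scales(encoder, probe, patches_by_mpp):
    """Evaluate a frozen foundation model across magnification scales.

    patches_by_mpp maps an mpp value (e.g., 0.25, 0.75, 1.5, 2.0) to a tuple
    (images, labels) of patches extracted at that scale. `encoder` is a frozen
    feature extractor and `probe` a classifier (e.g., a linear probe) trained
    on features from a single standard scale; both are assumed callables.
    """
    results = {}
    for mpp, (images, labels) in sorted(patches_by_mpp.items()):
        features = encoder(images)        # frozen foundation-model features
        preds = probe.predict(features)   # probe predictions on this scale
        results[mpp] = balanced_accuracy_score(labels, preds)
    return results  # per-scale balanced accuracy; dips at intermediate mpp reveal coverage gaps
```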
Problem

Research questions and friction points this paper is trying to address.

magnification sampling
pathology foundation models
multi-source domain adaptation
histopathology
intermediate magnifications
Innovation

Methods, ideas, or system contributions that make the work stand out.

continuous magnification sampling
multi-source domain adaptation
pathology foundation models
magnification coverage
optimized sampling distribution
Alexander Möllers
Berlin Institute for the Foundations of Learning and Data (BIFOLD); Machine Learning Group, Technische Universität Berlin; Aignostics
Julius Hense
PhD Student at BIFOLD, TU Berlin
Computational Pathology, Explainable AI, Multimodal Learning, Representation Learning
Florian Schulz
Berlin Institute for the Foundations of Learning and Data (BIFOLD); Machine Learning Group, Technische Universität Berlin
Timo Milbich
Aignostics GmbH
Computer Vision, Representation Learning, Digital Pathology, Self-supervised Learning
Maximilian Alber
TU Berlin
Machine Learning
Lukas Ruff
Aignostics
Machine Learning, Deep Learning, Trustworthy ML, Anomaly Detection, Digital Pathology