G2L:From Giga-Scale to Cancer-Specific Large-Scale Pathology Foundation Models via Knowledge Distillation

📅 2025-10-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
General-purpose foundation models for computational pathology incur prohibitive computational costs, hindering clinical deployment. Method: This paper proposes the Global-to-Local (G2L) knowledge distillation framework, the first to efficiently transfer knowledge from a billion-parameter generalist pathology foundation model to a lightweight student model, only 15% the size of the teacher, using just 1,000 task-specific cancer histopathology slides. G2L integrates multi-source histopathological image data to improve both task-specific performance and cross-institutional robustness. Contributions/Results: The distilled model surpasses same-scale state-of-the-art methods across multiple benchmarks and, notably, outperforms the giga-scale teacher on several metrics. It achieves a 3.2× speedup in inference latency and reduces GPU memory consumption by 76%, establishing a high-performance, cost-efficient path for deploying pathology AI in real-world clinical settings.

📝 Abstract
Recent studies in pathology foundation models have shown that scaling training data, diversifying cancer types, and increasing model size consistently improve their performance. However, giga-scale foundation models, which are trained on hundreds of thousands of slides covering tens of cancer types and contain billions of parameters, pose significant challenges for practical use due to their tremendous computational costs in both development and deployment. In this work, we present a novel strategy, named the G2L framework, to increase the performance of large-scale foundation models, which consist of only 15% of the parameters of giga-scale models, to a comparable performance level of giga-scale models in cancer-specific tasks. Our approach applies knowledge distillation, transferring the capabilities of a giga-scale model to a large-scale model, using just 1K pathology slides of a target cancer (e.g., breast, prostate, etc.). The resulting distilled model not only outperformed state-of-the-art models of the same size (i.e., large-scale) across several benchmarks but also, interestingly, surpassed the giga-scale teacher and huge-scale models in some benchmarks. In addition, the distilled model exhibited a higher robustness index, indicating improved resilience to image variations originating from multiple institutions. These findings suggest that the proposed distillation approach for a large-scale model is a data- and parameter-efficient way to achieve giga-scale-level performance for cancer-specific applications without prohibitive computational burden.
Problem

Research questions and friction points this paper is trying to address.

Reducing computational costs of giga-scale pathology models
Transferring giga-scale model performance to smaller models
Achieving cancer-specific accuracy with minimal training data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge distillation transfers giga-scale model capabilities
Uses only 1K slides for cancer-specific model training
Achieves giga-scale performance with 15% parameters
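The paper does not spell out its distillation objective here, but the teacher-to-student transfer it describes is conventionally implemented as logit distillation in the style of Hinton et al.: the student is trained to match the teacher's temperature-softened output distribution. Below is a minimal NumPy sketch of that loss; the function names, temperature value, and the choice of KL on softened logits are illustrative assumptions, not the authors' confirmed G2L objective (which may instead distill intermediate features).

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution,
    # exposing the teacher's "dark knowledge" about non-target classes.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL(teacher || student) on temperature-softened outputs,
    # scaled by T^2 so gradients keep a comparable magnitude across T
    # (standard logit distillation; T=4.0 is an illustrative choice).
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()

# A student that matches the teacher incurs (near) zero loss;
# a mismatched student incurs a positive loss.
matched = distillation_loss([[1.0, 2.0, 3.0]], [[1.0, 2.0, 3.0]])
mismatched = distillation_loss([[3.0, 2.0, 1.0]], [[1.0, 2.0, 3.0]])
```

In practice this term is typically mixed with a supervised cross-entropy loss on the 1K target-cancer slides, so the student learns both the teacher's behavior and the task labels.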
Yesung Cho
RadiSen Co. Ltd., Seoul, South Korea
Sungmin Lee
AIX, SK Telecom
Machine Learning · Computer Vision
Geongyu Lee
RadiSen Co. Ltd., Seoul, South Korea
Minkyung Lee
RadiSen Co. Ltd., Seoul, South Korea
Jongbae Park
Kyunghee University, Seoul, South Korea
Dongmyung Shin
RadiSen Co. Ltd., Seoul, South Korea