🤖 AI Summary
This work addresses the high cost of selecting among self-supervised pre-trained models for medical image segmentation by proposing a fine-tuning-free, topology-driven transferability estimation framework. It introduces manifold topology into the assessment of medical foundation models, quantifying both global structural isomorphism between feature and label manifolds (via minimum spanning trees and manifold separability) and local topological consistency at anatomical boundaries. A task-adaptive fusion mechanism then combines these global and local signals to rank candidate models effectively. Evaluated on the OpenMind benchmark, the proposed method achieves roughly a 31% relative improvement in weighted Kendall's tau over existing approaches, improving both the efficiency and the accuracy of model selection.
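Transferability estimation methods like this one are typically scored by how well their predicted rankings of candidate models correlate with the performance actually obtained after fine-tuning; weighted Kendall's tau is the standard correlation for this, since it emphasizes agreement among the top-ranked models. A minimal sketch using SciPy, with hypothetical scores (the numbers below are illustrative, not from the paper):

```python
from scipy.stats import weightedtau

# Hypothetical transferability scores assigned to 5 candidate SSL models
# by a training-free TE metric (illustrative values only).
te_scores = [0.82, 0.75, 0.91, 0.60, 0.70]

# Hypothetical Dice scores the same 5 models reach after full fine-tuning.
finetuned_dice = [0.85, 0.78, 0.88, 0.72, 0.65]

# Weighted Kendall's tau: rank correlation that up-weights the top ranks,
# so correctly ordering the best candidates matters most.
tau, _ = weightedtau(te_scores, finetuned_dice)
print(f"weighted Kendall's tau: {tau:.3f}")
```

A tau near 1 means the training-free ranking nearly matches the expensive fine-tuned ranking; the paper's reported gain is a ~31% relative improvement in this statistic over prior TE baselines.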
📝 Abstract
The advent of large-scale self-supervised learning (SSL) has produced a vast zoo of medical foundation models. However, selecting the optimal medical foundation model for a specific segmentation task remains a computational bottleneck. Existing Transferability Estimation (TE) metrics, designed primarily for classification, rely on global statistical assumptions and fail to capture the topological complexity essential for dense prediction. We propose a novel Topology-Driven Transferability Estimation framework that evaluates manifold tractability rather than statistical overlap. Our approach introduces three components: (1) Global Representation Topology Divergence (GRTD), utilizing Minimum Spanning Trees to quantify feature-label structural isomorphism; (2) Local Boundary-Aware Topological Consistency (LBTC), which assesses manifold separability specifically at critical anatomical boundaries; and (3) Task-Adaptive Fusion, which dynamically integrates the global and local metrics based on the semantic cardinality of the target task. Validated on the large-scale OpenMind benchmark across diverse anatomical targets and SSL foundation models, our approach significantly outperforms state-of-the-art baselines, achieving roughly a **31%** relative improvement in weighted Kendall's tau, and provides a robust, training-free proxy for efficient model selection without the cost of fine-tuning. The code will be made publicly available upon acceptance.
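The abstract does not give GRTD's exact formulation, but its core ingredient, comparing the minimum-spanning-tree structure of the feature manifold against that of the label manifold, can be sketched. The following toy proxy (names `mst_edge_weights` and `grtd_proxy` are my own, and the L1 gap between normalized edge-weight spectra is an illustrative divergence, not the paper's definition) shows the mechanics under those assumptions:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_edge_weights(points):
    """Sorted edge weights of the Euclidean MST over a point set."""
    dist = squareform(pdist(points))       # dense pairwise distance matrix
    mst = minimum_spanning_tree(dist)      # sparse (n x n) MST
    return np.sort(mst.data)               # the n-1 tree edge weights

def grtd_proxy(features, labels):
    """Toy feature-vs-label topology divergence: L1 gap between the
    normalized sorted MST edge-weight spectra of the two manifolds.
    (Illustrative stand-in for GRTD, not the paper's metric.)"""
    w_f = mst_edge_weights(features)
    w_l = mst_edge_weights(labels)
    w_f = w_f / w_f.sum()
    w_l = w_l / w_l.sum()
    return float(np.abs(w_f - w_l).sum())

rng = np.random.default_rng(0)
feats = rng.normal(size=(50, 16))   # hypothetical frozen SSL features
labels = rng.normal(size=(50, 2))   # hypothetical label-space embedding
print(f"GRTD-style divergence: {grtd_proxy(feats, labels):.4f}")
```

A lower divergence would indicate that the candidate model's feature manifold is structurally closer to the label manifold, i.e. a better transfer candidate under this global criterion; the paper additionally fuses this with the local boundary-aware term.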