🤖 AI Summary
Medical foundation model pretraining suffers from data inefficiency: scaling data volume alone does not guarantee performance gains, and existing sample selection strategies lack theoretical grounding. To address this, the authors introduce a V-information-based self-supervised paradigm that formally models sample selection as a joint optimization of diversity and difficulty, and propose OptiDEL, a method for generating and selecting discriminative samples under limited data. Theoretically, V-information provides a rigorous information-theoretic foundation for data-effective learning; technically, OptiDEL couples adversarial-style augmentation (harder samples) with diversity-aware selection. Experiments show that with only 5% of the pre-training data, OptiDEL surpasses the full-data baseline by up to 6.2% mIoU and outperforms competing methods by an average of 4.7% mIoU across eight medical imaging datasets, a 20× improvement in data efficiency.
📝 Abstract
Medical foundation models pre-trained with self-supervision on large-scale datasets demonstrate exceptional performance. Recent research challenges this common paradigm by introducing data-effective learning approaches, showing that merely increasing pre-training data volume does not necessarily improve model performance. However, current methods lack clear selection criteria, and their theoretical foundation remains unknown. In this paper, as a first attempt to address this limitation, we introduce V-information into self-supervised pre-training of foundation models to provide a theoretical foundation for sample selection. Our derivation shows that optimizing V-information frames sample selection as an optimization problem in which choosing diverse and challenging samples enhances model performance even under limited training data. Guided by this result, we develop an optimized data-effective learning method (OptiDEL) that optimizes V-information in real-world medical domains by generating more diverse and harder samples. Comparing OptiDEL with state-of-the-art approaches, we find that it consistently outperforms existing methods across eight different datasets, with foundation models trained on only 5% of the pre-training data achieving up to 6.2% higher mIoU than those trained on the full dataset. Remarkably, OptiDEL demonstrates an average improvement of 4.7% mIoU over competing methods while using 20× less training data.
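To make the "diverse and challenging" criterion concrete, here is a minimal sketch of the general idea, not the paper's actual algorithm. It uses pointwise V-information (PVI, the per-sample log-likelihood gain from conditioning on the input; Xu et al., 2020) as a difficulty signal, and a greedy farthest-first rule for diversity. All function names (`pvi`, `select_samples`) and the scalar-feature setup are illustrative assumptions.

```python
import math

def pvi(p_null, p_cond):
    """Pointwise V-information of a sample: log-likelihood gain of the
    true label when the predictor may condition on the input.
    Low PVI => the input barely helps => a 'hard' sample."""
    return math.log2(p_cond) - math.log2(p_null)

def select_samples(features, pvi_scores, k):
    """Illustrative greedy selection balancing difficulty and diversity.
    Score = distance to the already-selected set (diversity)
            minus PVI (so harder samples rank higher).
    Features are 1-D scalars here purely for simplicity."""
    selected = []
    remaining = list(range(len(features)))
    while remaining and len(selected) < k:
        def score(i):
            # Farthest-first: distance to nearest selected sample,
            # defaulting to 1.0 when nothing is selected yet.
            div = min((abs(features[i] - features[j]) for j in selected),
                      default=1.0)
            return div - pvi_scores[i]
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

In this toy form, a sample whose conditional probability equals the marginal (PVI = 0) is maximally hard, and selection alternates between hard and mutually distant points, mirroring the joint diversity/difficulty objective the abstract describes.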