Foundation X: Integrating Classification, Localization, and Segmentation through Lock-Release Pretraining Strategy for Chest X-ray Analysis

📅 2025-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multi-task modeling in medical imaging is hindered by heterogeneous annotations and scarce high-quality supervision signals. This paper introduces a chest X-ray–specific multi-task foundation model that unifies disease classification, lesion localization, and organ segmentation, leveraging expert annotations from 11 heterogeneous public datasets. The authors propose a novel Lock-Release pretraining strategy that integrates student–teacher distillation within a multi-task cyclic collaborative learning framework to balance knowledge generalization and task-specific adaptation. The method mitigates cross-task annotation inconsistency and improves localization and segmentation accuracy across datasets and tasks, reporting an average +3.2% Dice score over state-of-the-art methods. The code and pretrained models are publicly released.

📝 Abstract
Developing robust and versatile deep-learning models is essential for enhancing diagnostic accuracy and guiding clinical interventions in medical imaging, but it requires a large amount of annotated data. The advancement of deep learning has facilitated the creation of numerous medical datasets with diverse expert-level annotations. Aggregating these datasets can maximize data utilization and address the inadequacy of labeled data. However, the heterogeneity of expert-level annotations across tasks such as classification, localization, and segmentation presents a significant challenge for learning from these datasets. To this end, we introduce Foundation X, an end-to-end framework that utilizes diverse expert-level annotations from numerous public datasets to train a foundation model capable of multiple tasks including classification, localization, and segmentation. To address the challenges of annotation and task heterogeneity, we propose a Lock-Release pretraining strategy to enhance the cyclic learning from multiple datasets, combined with the student-teacher learning paradigm, ensuring the model retains general knowledge for all tasks while preventing overfitting to any single task. To demonstrate the effectiveness of Foundation X, we trained a model using 11 chest X-ray datasets, covering annotations for classification, localization, and segmentation tasks. Our experimental results show that Foundation X achieves notable performance gains through extensive annotation utilization, excels in cross-dataset and cross-task learning, and further enhances performance in organ localization and segmentation tasks. All code and pretrained models are publicly accessible at https://github.com/jlianglab/Foundation_X.
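The abstract describes cyclic learning over multiple datasets in which the model alternates between a "locked" phase (shared backbone frozen while a task head adapts) and a "released" phase (joint updates). A minimal sketch of such a schedule, assuming this lock-then-release alternation per dataset; the function name, phase labels, and dataset names are illustrative, not the authors' API:

```python
# Hypothetical sketch of a Lock-Release cyclic schedule: for each cycle,
# visit every dataset twice -- first with the shared backbone locked
# (frozen) so only the task-specific head adapts, then with the backbone
# released for joint updates. Step ordering is an assumption based on the
# abstract's description, not the paper's exact training loop.

def lock_release_schedule(datasets, cycles):
    """Yield (cycle, dataset, phase) training steps for cyclic multi-task learning."""
    steps = []
    for cycle in range(cycles):
        for name in datasets:
            steps.append((cycle, name, "lock"))     # backbone frozen, head adapts
            steps.append((cycle, name, "release"))  # backbone unfrozen, joint update
    return steps

# Example: three task-typed datasets, two cycles.
schedule = lock_release_schedule(["cls_dataset", "loc_dataset", "seg_dataset"], cycles=2)
```

Each step would select the matching task head (classification, localization, or segmentation) and set the backbone's trainability according to the phase.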
Problem

Research questions and friction points this paper is trying to address.

Integrates classification, localization, and segmentation for chest X-ray analysis.
Addresses annotation heterogeneity across multiple medical datasets.
Enhances diagnostic accuracy with the Lock-Release pretraining strategy.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lock-Release pretraining strategy for multi-task learning
Student-teacher paradigm to prevent overfitting
Integration of diverse expert-level annotations
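The student-teacher paradigm listed above is commonly instantiated by having the teacher's weights track the student's via an exponential moving average (EMA). A minimal sketch under that assumption; the dict-of-scalars "weights" and the momentum value are illustrative, not taken from the paper:

```python
# Hypothetical student-teacher EMA update: the teacher is a slowly moving
# average of the student, which stabilizes the supervision signal and helps
# prevent overfitting to any single task. Scalar entries stand in for
# network parameter tensors.

def ema_update(teacher, student, momentum=0.99):
    """Move each teacher weight toward the student's copy via EMA."""
    return {k: momentum * teacher[k] + (1.0 - momentum) * student[k]
            for k in teacher}

teacher = {"w": 1.0}
student = {"w": 0.0}
teacher = ema_update(teacher, student, momentum=0.9)  # w -> 0.9
```

With a high momentum, the teacher changes slowly, so its predictions provide a consistent distillation target even as the student is cycled across heterogeneous tasks.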