🤖 AI Summary
This work addresses the challenge of deploying large foundation models for chest X-ray (CXR) image segmentation under stringent computational constraints in clinical settings. The authors propose a two-stage fine-tuning framework that first employs AdaLoRA for adaptive low-rank adaptation of the encoder, followed by selective INT8 mixed-precision quantization combined with quantization-aware training (QAT). This approach substantially improves parameter efficiency while preserving structural fidelity in segmentation outputs. Evaluated on a large-scale CXR dataset, the method achieves a Dice score of 95.6% using 16.6× fewer trainable parameters and attains a model compression ratio of 2.24×, with negligible degradation in segmentation accuracy due to quantization. These results demonstrate the practicality and reliability of the proposed framework for resource-constrained medical applications.
📝 Abstract
Chest X-ray (CXR) segmentation is an important step in computer-aided diagnosis, yet deploying large foundation models in clinical settings remains challenging due to computational constraints. We propose AdaLoRA-QAT, a two-stage fine-tuning framework that combines adaptive low-rank encoder adaptation with full quantization-aware training. Adaptive rank allocation improves parameter efficiency, while selective mixed-precision INT8 quantization preserves structural fidelity crucial for clinical reliability. Evaluated across large-scale CXR datasets, AdaLoRA-QAT achieves 95.6% Dice, matching full-precision SAM decoder fine-tuning while reducing trainable parameters by 16.6× and yielding 2.24× model compression. A Wilcoxon signed-rank test confirms that quantization does not significantly degrade segmentation accuracy. These results demonstrate that AdaLoRA-QAT effectively balances accuracy, efficiency, and structural trustworthiness, enabling compact and deployable foundation models for medical image segmentation. Code and pretrained models are available at: https://prantik-pdeb.github.io/adaloraqat.github.io/
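To make the two-stage recipe concrete, here is a minimal NumPy sketch of the two ingredients: a LoRA-style low-rank update on a frozen weight (AdaLoRA additionally reallocates rank across layers by importance, which is omitted here) and the symmetric INT8 fake-quantization step used inside a QAT forward pass. This is an illustrative reconstruction under stated assumptions, not the paper's implementation; the names `adapted_forward` and `fake_quant_int8` and all sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: low-rank adaptation of a frozen layer (toy sizes) ---
d, r = 8, 2                              # layer width, adapter rank
W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection (zero-init,
                                         # so training starts from W exactly)

def adapted_forward(x):
    """Frozen weight plus trainable low-rank update: y = (W + B A) x."""
    return (W + B @ A) @ x

# --- Stage 2: INT8 fake quantization, as applied during QAT ---
def fake_quant_int8(w):
    """Symmetric per-tensor quantize-dequantize to simulate INT8 inference."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127)
    return q * scale, scale

x = rng.standard_normal(d)
y_fp = adapted_forward(x)                # full-precision output
W_q, scale = fake_quant_int8(W + B @ A)  # quantize the merged weight
y_q = W_q @ x                            # simulated INT8 output
err = float(np.max(np.abs(y_fp - y_q)))  # small quantization error
```

During QAT, the network is trained through this quantize-dequantize operation so the adapters learn weights that remain accurate after INT8 rounding, which is what lets the compressed model match full-precision accuracy.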