🤖 AI Summary
To address the restricted local receptive fields of CNNs, the weak integration of existing CNN-Transformer hybrids, and the domain gap and high computational cost of vision-language and foundation models in medical image segmentation, this paper proposes U-DFA, a unified DINOv2-Unet encoder-decoder architecture that freezes the DINOv2 backbone and introduces a lightweight Local-Global Fusion Adapter (LGFA). The LGFA injects spatial features from a CNN-based Spatial Pattern Adapter (SPA) into the frozen DINOv2 blocks at multiple stages, enabling efficient local-global feature collaboration through multi-stage spatial feature injection and a dual-fusion attention mechanism. Evaluated on the Synapse and ACDC benchmarks, the method achieves state-of-the-art performance with only 33% of the model parameters trainable, improving segmentation accuracy, computational efficiency, and cross-modal generalization.
📝 Abstract
Accurate medical image segmentation is one of the most essential tasks in the diagnostic pipeline. CNN-based models, despite their extensive use, suffer from limited local receptive fields and fail to capture global context. A common approach that combines CNNs with transformers attempts to bridge this gap but fails to effectively fuse local and global features. Recently emerged VLMs and foundation models have been adapted for downstream medical imaging tasks; however, they suffer from an inherent domain gap and high computational cost. To this end, we propose U-DFA, a unified DINOv2-Unet encoder-decoder architecture that integrates a novel Local-Global Fusion Adapter (LGFA) to enhance segmentation performance. LGFA modules inject spatial features from a CNN-based Spatial Pattern Adapter (SPA) module into frozen DINOv2 blocks at multiple stages, enabling effective fusion of high-level semantic and spatial features. Our method achieves state-of-the-art performance on the Synapse and ACDC datasets with only 33% of the model parameters trainable. These results demonstrate that U-DFA is a robust and scalable framework for medical image segmentation across multiple modalities.
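The abstract does not give the LGFA's equations, but the core idea of injecting CNN spatial features into a frozen transformer stream can be sketched as a residual cross-attention step. The following NumPy sketch is purely illustrative: the shapes, weight matrices, and the `lgfa_inject` function are assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes: 196 patch tokens (a 14x14 grid), embedding dim 64.
T, D = 196, 64

# Stand-in for one frozen DINOv2 block's output (semantic tokens).
semantic = rng.standard_normal((T, D))

# Stand-in for SPA output: CNN spatial features flattened to tokens.
spatial = rng.standard_normal((T, D))

# Lightweight adapter projections -- the only "trainable" part here.
W_q = rng.standard_normal((D, D)) / np.sqrt(D)
W_k = rng.standard_normal((D, D)) / np.sqrt(D)
W_v = rng.standard_normal((D, D)) / np.sqrt(D)

def lgfa_inject(semantic, spatial):
    """One plausible reading of spatial-feature injection: semantic tokens
    query the spatial tokens via cross-attention, and the attended result
    is added back as a residual, leaving the frozen stream intact."""
    q = semantic @ W_q
    k = spatial @ W_k
    v = spatial @ W_v
    attn = softmax(q @ k.T / np.sqrt(D))  # (T, T) attention weights
    return semantic + attn @ v            # residual injection

fused = lgfa_inject(semantic, spatial)
print(fused.shape)  # (196, 64)
```

Because the backbone stays frozen and only the small adapter projections would be trained, this pattern is consistent with the paper's reported 33% trainable-parameter budget, though the real LGFA (with its dual-fusion attention) is certainly more elaborate than this single cross-attention step.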