U-DFA: A Unified DINOv2-Unet with Dual Fusion Attention for Multi-Dataset Medical Segmentation

📅 2025-10-01
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the limitations of CNNs' restricted local receptive fields, the insufficient fusion in existing CNN-Transformer hybrids, and the domain gap and high computational cost of vision-language and foundation models in medical image segmentation, this paper proposes U-DFA, a unified DINOv2-Unet encoder-decoder architecture that freezes the DINOv2 backbone and introduces a lightweight Local-Global Fusion Adapter (LGFA). The LGFA enables efficient local-global feature collaboration by injecting spatial features at multiple stages through a dual-fusion attention mechanism; these spatial features come from a CNN-based Spatial Pattern Adapter (SPA), which also strengthens cross-modal representation consistency. Evaluated on the Synapse and ACDC benchmarks, the method achieves state-of-the-art performance with only 33% of the model parameters trainable, improving segmentation accuracy, computational efficiency, and cross-modal generalization.

πŸ“ Abstract
Accurate medical image segmentation plays a crucial role in overall diagnosis and is one of the most essential tasks in the diagnostic pipeline. CNN-based models, despite their extensive use, suffer from limited local receptive fields and fail to capture the global context. A common approach that combines CNNs with transformers attempts to bridge this gap but fails to effectively fuse the local and global features. With the recent emergence of VLMs and foundation models, they have been adapted for downstream medical imaging tasks; however, they suffer from an inherent domain gap and high computational cost. To this end, we propose U-DFA, a unified DINOv2-Unet encoder-decoder architecture that integrates a novel Local-Global Fusion Adapter (LGFA) to enhance segmentation performance. LGFA modules inject spatial features from a CNN-based Spatial Pattern Adapter (SPA) module into frozen DINOv2 blocks at multiple stages, enabling effective fusion of high-level semantic and spatial features. Our method achieves state-of-the-art performance on the Synapse and ACDC datasets with only 33% of the trainable model parameters. These results demonstrate that U-DFA is a robust and scalable framework for medical image segmentation across multiple modalities.
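The abstract describes the core mechanism at a high level: CNN-derived spatial features are injected into frozen transformer blocks and fused with their semantic tokens via attention. The paper's exact LGFA formulation is not reproduced here, so the following is a minimal NumPy sketch of that injection pattern under assumed shapes; the function name `dual_fusion_attention`, the choice of a channel gate plus a token-wise spatial gate, and the additive fusion are all illustrative assumptions, not the published architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dual_fusion_attention(vit_tokens, cnn_feats):
    """Illustrative fusion of frozen-ViT tokens with CNN spatial features.

    vit_tokens: (N, C) semantic tokens from a frozen transformer block.
    cnn_feats:  (N, C) spatial features from a CNN adapter (hypothetical SPA),
                assumed already projected to the token grid and channel width.
    """
    # Channel gate: sigmoid of the CNN features' global channel statistics,
    # modulating which semantic channels the spatial stream emphasizes.
    channel_gate = 1.0 / (1.0 + np.exp(-cnn_feats.mean(axis=0)))   # (C,)
    # Spatial gate: per-token weights derived from the semantic tokens,
    # deciding where the injected spatial detail contributes most.
    spatial_gate = softmax(vit_tokens.mean(axis=1))                # (N,)
    # Additive dual fusion: gated semantic stream + gated spatial stream.
    fused = vit_tokens * channel_gate + cnn_feats * spatial_gate[:, None]
    return fused
```

In a full model this fusion would be applied at multiple frozen DINOv2 stages, with only the adapter parameters trained, which is how a small trainable-parameter fraction such as the reported 33% becomes possible.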
Problem

Research questions and friction points this paper is trying to address.

Bridging local and global feature gaps in medical image segmentation
Overcoming domain gaps and high computational costs in foundation models
Enhancing multi-dataset medical segmentation with efficient parameter usage
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified DINOv2-Unet architecture with dual fusion attention
Local-Global Fusion Adapter injects spatial features into DINOv2
Fuses CNN spatial patterns with DINOv2 semantic features