🤖 AI Summary
To address the limited generalization of the Segment Anything Model (SAM) on unseen complex tasks, this paper proposes a wavelet-based feature enhancement method. The approach integrates the discrete wavelet transform (DWT) into SAM's adaptation framework, marking the first such incorporation, to explicitly model multi-scale high-frequency features. The authors design a complex-domain adapter that jointly encodes spatial and frequency information, improving both interpretability and adaptability, and further introduce learnable wavelet-coefficient fusion alongside a lightweight fine-tuning strategy compatible with both the SAM and SAM2 architectures. Evaluated on four low-level vision tasks, the method consistently outperforms existing adaptation approaches, demonstrating superior robustness, flexibility, and generalization across diverse backbone networks and adapter configurations while maintaining computational efficiency.
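The summary's core idea, extracting multi-scale high-frequency DWT subbands and fusing them with learnable coefficients, can be sketched minimally as below. This is an illustration only, not the paper's implementation: the Haar transform, two-level decomposition, and scalar fusion weights (`fuse`, `multiscale_highfreq`) are assumptions chosen for simplicity.

```python
import numpy as np

def haar_dwt2(x):
    """One level of the 2D Haar DWT (averaging form). Returns the
    low-frequency approximation and three high-frequency detail subbands."""
    lo = (x[0::2] + x[1::2]) / 2.0            # row lowpass
    hi = (x[0::2] - x[1::2]) / 2.0            # row highpass
    cA = (lo[:, 0::2] + lo[:, 1::2]) / 2.0    # LL: approximation
    cH = (lo[:, 0::2] - lo[:, 1::2]) / 2.0    # LH: horizontal detail
    cV = (hi[:, 0::2] + hi[:, 1::2]) / 2.0    # HL: vertical detail
    cD = (hi[:, 0::2] - hi[:, 1::2]) / 2.0    # HH: diagonal detail
    return cA, (cH, cV, cD)

def multiscale_highfreq(x, levels=2):
    """Recursively decompose the approximation, keeping only the
    high-frequency subbands at each scale."""
    feats, approx = [], x
    for _ in range(levels):
        approx, details = haar_dwt2(approx)
        feats.append(details)
    return feats

def fuse(details, w):
    """Stand-in for learnable coefficient fusion: a weighted sum of the
    three detail subbands (weights would be trained in practice)."""
    cH, cV, cD = details
    return w[0] * cH + w[1] * cV + w[2] * cD

# Usage: a 64x64 input yields 32x32 details at level 1, 16x16 at level 2.
x = np.arange(64 * 64, dtype=float).reshape(64, 64)
feats = multiscale_highfreq(x, levels=2)
fused = fuse(feats[0], (0.5, 0.3, 0.2))
```

In an adapter setting, the fused high-frequency maps would be projected and injected into the frozen encoder's features; that wiring is specific to the paper and omitted here.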
📝 Abstract
The emergence of large foundation models has propelled significant advances in various domains. The Segment Anything Model (SAM), a leading model for image segmentation, exemplifies these advances, outperforming traditional methods. However, such foundation models often suffer from performance degradation when applied to complex tasks for which they were not trained. Existing methods typically employ adapter-based fine-tuning strategies to adapt SAM to such tasks, leveraging high-frequency features extracted in the Fourier domain. Our analysis reveals that these approaches offer limited benefits due to constraints in their feature extraction techniques. To overcome this, we propose ***SAMwave***, a novel and interpretable approach that utilizes the wavelet transform to extract richer, multi-scale high-frequency features from the input data. Extending this, we introduce complex-valued adapters capable of capturing complex-valued spatial-frequency information via complex wavelet transforms. By adaptively integrating these wavelet coefficients, SAMwave enables SAM's encoder to capture information more relevant for dense prediction. Empirical evaluations on four challenging low-level vision tasks demonstrate that SAMwave significantly outperforms existing adaptation methods. This superior performance is consistent across both the SAM and SAM2 backbones and holds for both real- and complex-valued adapter variants, highlighting the efficiency, flexibility, and interpretability of our proposed method for adapting segment anything models.
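The complex-valued adapters mentioned above operate on complex wavelet coefficients. The paper's actual architecture is not reproduced here; the sketch below only illustrates the general pattern of a complex-valued linear layer with a magnitude-thresholding, phase-preserving nonlinearity (in the style of modReLU). The fabricated coefficients, dimensions, and threshold are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical complex coefficients: in SAMwave these would come from a
# complex wavelet transform of the input; random values stand in here
# purely to illustrate shapes and dtypes.
coeffs = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))

d_in, d_out = 32, 8  # assumed adapter dimensions
W = (rng.standard_normal((d_in, d_out))
     + 1j * rng.standard_normal((d_in, d_out))) / np.sqrt(d_in)
b = np.zeros(d_out, dtype=complex)

def complex_adapter(z, W, b, thresh=0.1):
    """Complex-valued linear projection followed by a modReLU-style
    activation: shrink the magnitude, keep the phase."""
    h = z @ W + b
    mag, phase = np.abs(h), np.angle(h)
    return np.maximum(mag - thresh, 0.0) * np.exp(1j * phase)

out = complex_adapter(coeffs, W, b)  # complex array of shape (32, 8)
```

Acting on magnitude while preserving phase is a common way to build nonlinearities for complex-valued networks, since applying a real ReLU separately to real and imaginary parts distorts phase information.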