AI Summary
Vision foundation models (VFMs) suffer from feature redundancy during fine-tuning, hindering adaptation to downstream tasks. Method: We propose a parameter-free fine-tuning paradigm based on fine-grained channel selection and reuse, requiring no parameter updates. Our approach leverages output-difference-driven channel filtering to suppress redundancy while enhancing discriminative features; it incorporates an inference-only channel selection algorithm together with a feature remapping and reuse strategy, and integrates as a plug-and-play module compatible with adapter-based methods (e.g., LoRA). Contribution/Results: Evaluated on multiple cross-domain and in-domain benchmarks, our method achieves significant performance gains without any parameter updates, while substantially reducing GPU memory consumption and computational overhead.
Abstract
Vision foundation models (VFMs) are large pre-trained models that serve as backbones for a wide range of vision tasks. Fine-tuning VFMs can further unlock their potential for downstream tasks or scenarios. However, VFMs often contain significant feature redundancy, which may limit their adaptability to new tasks. In this paper, we investigate the redundancy in the Segment Anything Model (SAM) and propose a parameter-free fine-tuning method to address it. Unlike traditional fine-tuning methods that adjust parameters, our method emphasizes selecting, reusing, and enhancing pre-trained features, offering a new perspective on model fine-tuning. Specifically, we introduce a channel selection algorithm based on the model's output difference to identify redundant and effective channels. By selectively replacing redundant channels with more effective ones, we filter out less useful features and reuse those more relevant to the downstream task, thereby enhancing the task-specific feature representation. Experiments on both out-of-domain and in-domain datasets demonstrate the efficiency and effectiveness of our method. Notably, our approach integrates seamlessly with existing fine-tuning strategies (e.g., LoRA, Adapter), further boosting the performance of already fine-tuned models. Moreover, since our channel selection involves only model inference, our method significantly reduces computational and GPU memory overhead.
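The core idea of output-difference-driven channel selection can be sketched as follows. This is a minimal illustrative toy, not the paper's actual SAM pipeline: `model_fn`, the `(N, C)` feature shape, and the scoring by ablation are assumptions made for clarity. Each channel is scored by how much zeroing it changes the model output; the lowest-scoring (redundant) channels are then overwritten with copies of the highest-scoring (effective) ones, requiring inference only and no parameter updates.

```python
import numpy as np

def select_and_reuse_channels(features, model_fn, n_replace):
    """Score channels by output difference under ablation, then reuse.

    features: (N, C) pre-trained activations.
    model_fn: maps features to an output (a stand-in for the task head).
    n_replace: number of redundant channels to overwrite.
    """
    base_out = model_fn(features)
    n_channels = features.shape[1]
    scores = np.empty(n_channels)
    for c in range(n_channels):
        ablated = features.copy()
        ablated[:, c] = 0.0  # suppress one channel
        # importance = how much the output changes without this channel
        scores[c] = np.linalg.norm(model_fn(ablated) - base_out)
    order = np.argsort(scores)
    redundant = order[:n_replace]    # smallest output difference
    effective = order[-n_replace:]   # largest output difference
    remapped = features.copy()
    # reuse: replace redundant channels with the most effective ones
    remapped[:, redundant] = features[:, effective]
    return remapped, redundant, effective
```

Since scoring only requires forward passes, no gradients or optimizer state are needed, which is where the memory savings in the abstract come from.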