Parameter-Free Fine-tuning via Redundancy Elimination for Vision Foundation Models

📅 2025-04-11
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Vision foundation models (VFMs) suffer from feature redundancy during fine-tuning, which hinders adaptation to downstream tasks. Method: We propose a parameter-free fine-tuning paradigm based on fine-grained channel selection and reuse that requires no parameter updates. Our approach leverages output-difference-driven channel filtering to suppress redundant channels while enhancing discriminative features; it combines an inference-only channel selection algorithm with feature remapping and reuse, and integrates as a plug-and-play module compatible with adapter-style methods (e.g., LoRA). Contribution/Results: Evaluated across multiple cross-domain and in-domain benchmarks, our method achieves significant performance gains without any parameter updates, while substantially reducing GPU memory consumption and computational overhead.

๐Ÿ“ Abstract
Vision foundation models (VFMs) are large pre-trained models that form the backbone of various vision tasks. Fine-tuning VFMs can further unlock their potential for downstream tasks or scenarios. However, VFMs often contain significant feature redundancy, which may limit their adaptability to new tasks. In this paper, we investigate the redundancies in the segment anything model (SAM) and then propose a parameter-free fine-tuning method to address this issue. Unlike traditional fine-tuning methods that adjust parameters, our method emphasizes selecting, reusing, and enhancing pre-trained features, offering a new perspective on model fine-tuning. Specifically, we introduce a channel selection algorithm based on the model's output difference to identify redundant and effective channels. By selectively replacing the redundant channels with more effective ones, we filter out less useful features and reuse the more relevant features to downstream tasks, thereby enhancing the task-specific feature representation. Experiments on both out-of-domain and in-domain datasets demonstrate the efficiency and effectiveness of our method. Notably, our approach can seamlessly integrate with existing fine-tuning strategies (e.g., LoRA, Adapter), further boosting the performance of already fine-tuned models. Moreover, since our channel selection involves only model inference, our method significantly reduces computational and GPU memory overhead.
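The selection-and-reuse idea described in the abstract can be sketched in a few lines. This is an illustrative reading of the output-difference criterion, not the authors' released code: each channel is scored by how much the model's output changes when that channel is ablated, and the lowest-scoring (redundant) channels are then overwritten with copies of the highest-scoring (effective) ones. The function names, the zero-ablation choice, and the `keep_ratio` parameter are all assumptions made for the sketch.

```python
import numpy as np

def channel_importance(features, head, baseline_out):
    # Score each channel by the output difference its ablation causes:
    # zero out one channel at a time and measure the change downstream.
    num_channels = features.shape[-1]
    scores = np.empty(num_channels)
    for c in range(num_channels):
        ablated = features.copy()
        ablated[..., c] = 0.0
        scores[c] = np.abs(head(ablated) - baseline_out).mean()
    return scores

def select_and_reuse(features, scores, keep_ratio=0.5):
    # Parameter-free "fine-tuning": replace the least important channels
    # with copies of the most important ones; no weights are updated.
    num_channels = features.shape[-1]
    order = np.argsort(scores)                  # ascending: redundant first
    n_drop = int(num_channels * (1 - keep_ratio))
    redundant = order[:n_drop]                  # channels to filter out
    effective = order[::-1][:n_drop]            # strongest channels, reused
    out = features.copy()
    out[..., redundant] = features[..., effective]
    return out
```

In this toy reading, `head` stands in for whatever downstream computation consumes the features (in SAM, the mask decoder); because the whole procedure is a sequence of forward passes, it needs only inference, which is consistent with the reduced memory and compute footprint the summary reports.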
Problem

Research questions and friction points this paper is trying to address.

Eliminate feature redundancy in Vision Foundation Models
Propose parameter-free fine-tuning via channel selection
Enhance task-specific features without adjusting model parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameter-free fine-tuning via redundancy elimination
Channel selection algorithm for feature enhancement
Seamless integration with existing fine-tuning strategies
Authors

Jiahuan Long, Shanghai Jiao Tong University
Tingsong Jiang, Chinese Academy of Military Science
Wen Yao, Chinese Academy of Military Science
Yizhe Xiong, Tsinghua University (Transfer Learning, Computer Vision, Large Language Models)
Zhengqin Xu, Shanghai Jiao Tong University
Shuai Jia, Shanghai Jiao Tong University (Computer Vision, Visual Object Tracking, Adversarial Learning)
Chao Ma, Shanghai Jiao Tong University