🤖 AI Summary
To address the excessive computational and parameter overhead of fine-tuning large convolutional models, this paper proposes a parameter-efficient fine-tuning method based on filter subspaces. Building on the representation of each convolutional filter as a linear combination of a small set of filter atoms, the key idea is to update only the learnable atoms, which perform spatial-only convolution, while keeping the channel-wise combination coefficients fixed, thereby preserving the spatially-invariant channel-combination knowledge acquired during pretraining. A recursive decomposition mechanism, in which each atom is itself expressed as a combination of another set of atoms, naturally expands the tunable subspace when more capacity is needed. This structured atomic representation shifts optimization from entire filters to compact, interpretable components, reducing trainable parameters by over three orders of magnitude (<0.1%) without sacrificing expressiveness. Extensive experiments demonstrate that the method consistently outperforms mainstream approaches, including LoRA and Adapter, across both discriminative and generative tasks, achieving performance on par with full-parameter fine-tuning.
📝 Abstract
Efficient fine-tuning methods are critical for addressing the high computational and parameter complexity of adapting large pre-trained models to downstream tasks. Our study is inspired by prior research that represents each convolution filter as a linear combination of a small set of filter subspace elements, referred to as filter atoms. In this paper, we propose to fine-tune pre-trained models by adjusting only filter atoms, which are responsible for spatial-only convolution, while preserving spatially-invariant channel combination knowledge in atom coefficients. In this way, we bring a new filter subspace view to model tuning. Furthermore, each filter atom can be recursively decomposed as a combination of another set of atoms, which naturally expands the number of tunable parameters in the filter subspace. By adapting only filter atoms, which comprise a small number of parameters, while keeping the rest of the model parameters constant, the proposed approach is highly parameter-efficient. It effectively preserves the capabilities of pre-trained models and prevents overfitting to downstream tasks. Extensive experiments show that such a simple scheme surpasses previous tuning baselines for both discriminative and generative tasks.
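To make the decomposition concrete, here is a minimal numpy sketch of the filter-subspace idea described above. The layer sizes, variable names, and the atom counts `m` and `m1` are illustrative assumptions, not values from the paper; the point is only the structure: frozen channel-combination coefficients, a small trainable set of spatial atoms, and a recursive expansion of those atoms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer shapes, chosen for illustration only.
c_out, c_in, k, m = 64, 32, 3, 6   # m filter atoms span the k x k filter subspace

# Each spatial filter W[o, i] is a linear combination of m shared k x k atoms.
atoms = rng.standard_normal((m, k, k))          # trainable during fine-tuning
coeffs = rng.standard_normal((c_out, c_in, m))  # frozen: channel-combination knowledge

# Reconstruct the full convolution weight: W[o, i] = sum_m coeffs[o, i, m] * atoms[m]
W = np.einsum('oim,mhw->oihw', coeffs, atoms)
assert W.shape == (c_out, c_in, k, k)

# Recursive expansion: each atom is itself a combination of m1 sub-atoms,
# growing the tunable parameter count from m*k*k to m*m1 + m1*k*k.
m1 = 9
sub_atoms = rng.standard_normal((m1, k, k))     # trainable
beta = rng.standard_normal((m, m1))             # trainable
expanded_atoms = np.einsum('mn,nhw->mhw', beta, sub_atoms)
W2 = np.einsum('oim,mhw->oihw', coeffs, expanded_atoms)
assert W2.shape == W.shape

# Only the atoms are tuned: a tiny fraction of the full filter bank.
print(atoms.size, W.size)  # 54 trainable vs 18432 full-filter parameters
```

In a forward pass, the reconstructed `W` would be used as an ordinary convolution weight, so gradients flow into `atoms` (or `beta` and `sub_atoms`) while `coeffs` stays frozen.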