🤖 AI Summary
To address the limited expressivity of Low-Rank Adaptation (LoRA), whose simple low-rank decomposition constrains the hypothesis space, this paper proposes Location-aware Cosine Adaptation (LoCA), a frequency-domain parameter-efficient fine-tuning method. LoCA introduces a learnable location-selection mechanism into the inverse Discrete Cosine Transform (iDCT) domain, enabling dynamic identification and sparse optimization of the most informative frequency components. The authors theoretically show that frequency-domain approximation with carefully selected components can surpass the expressivity of low-rank decomposition, and they design a finite-difference gradient estimator to adaptively select frequency-component locations during training. Experiments on diverse language and vision fine-tuning tasks demonstrate that LoCA improves parameter efficiency, consistently outperforming LoRA and other baselines under comparable computational budgets.
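The core construction described above can be sketched as follows: a weight update is parameterized by a handful of learnable coefficients placed at selected positions on a 2-D DCT spectrum, and the inverse transform maps that sparse spectrum back to a dense parameter-space update. This is a minimal illustration, not the paper's implementation; the function and variable names are assumptions, and the orthonormal DCT-II basis is built by hand to keep the example self-contained.

```python
import numpy as np

def idct_basis(n):
    """Orthonormal DCT-II basis matrix C, so that inverse = C.T."""
    k = np.arange(n)[:, None]   # frequency index
    t = np.arange(n)[None, :]   # spatial index
    C = np.cos(np.pi * (2 * t + 1) * k / (2 * n))
    C[0] *= 1 / np.sqrt(2)      # DC row scaling for orthonormality
    return C * np.sqrt(2 / n)

def sparse_idct_delta(shape, locations, coeffs):
    """Build a dense weight update from a few DCT coefficients.

    shape:     (rows, cols) of the weight matrix
    locations: list of (i, j) positions on the DCT spectrum
    coeffs:    one learnable value per location (hypothetical names)
    """
    spectrum = np.zeros(shape)
    for (i, j), c in zip(locations, coeffs):
        spectrum[i, j] = c               # sparse spectrum: most entries zero
    Cr, Cc = idct_basis(shape[0]), idct_basis(shape[1])
    # 2-D inverse DCT: W = Cr^T @ S @ Cc (orthonormal, so inverse = transpose)
    return Cr.T @ spectrum @ Cc

# Two learnable components suffice to produce a full-rank-free dense update.
delta_w = sparse_idct_delta((8, 8), [(0, 0), (1, 3)], [0.5, -0.2])
```

Because the basis is orthonormal, the forward 2-D DCT of `delta_w` recovers exactly the sparse spectrum, which is what makes tuning individual frequency components well-posed.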
📝 Abstract
Low-rank adaptation (LoRA) has become a prevalent method for adapting pre-trained large language models to downstream tasks. However, the simple low-rank decomposition form may constrain the hypothesis space. To address this limitation, we introduce Location-aware Cosine Adaptation (LoCA), a novel frequency-domain parameter-efficient fine-tuning method based on inverse Discrete Cosine Transform (iDCT) with selective locations of learnable components. We begin with a comprehensive theoretical comparison between frequency-domain and low-rank decompositions for fine-tuning pre-trained large models. Our analysis reveals that frequency-domain approximation with carefully selected frequency components can surpass the expressivity of traditional low-rank-based methods. Furthermore, we demonstrate that iDCT offers a more efficient implementation compared to inverse Discrete Fourier Transform (iDFT), allowing for better selection and tuning of frequency components while maintaining equivalent expressivity to the optimal iDFT-based adaptation. By employing finite-difference approximation to estimate gradients for discrete locations of learnable coefficients on the DCT spectrum, LoCA dynamically selects the most informative frequency components during training. Experiments on diverse language and vision fine-tuning tasks demonstrate that LoCA offers enhanced parameter efficiency while maintaining computational feasibility comparable to low-rank-based methods.
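The finite-difference idea in the abstract can be illustrated with a toy sketch: since a coefficient's position on the DCT spectrum is an integer index, an ordinary gradient is undefined, but one can estimate a directional "gradient" by comparing the loss when the component is shifted one step along each axis. This is an assumption-laden illustration of the principle, not the paper's estimator; `loss_fn` and all names here are hypothetical.

```python
import numpy as np

def location_finite_diff(loss_fn, spectrum, loc, axis, step=1):
    """Central finite difference of loss w.r.t. a coefficient's discrete
    location, probed one step in each direction along `axis`.
    Boundary handling is omitted for brevity (hypothetical sketch)."""
    i, j = loc
    shift = [0, 0]
    shift[axis] = step
    plus = (i + shift[0], j + shift[1])
    minus = (i - shift[0], j - shift[1])
    c = spectrum[i, j]

    def loss_at(pos):
        s = spectrum.copy()
        s[i, j] = 0.0        # move the coefficient to the probed position
        s[pos] = c
        return loss_fn(s)

    return (loss_at(plus) - loss_at(minus)) / (2 * step)
```

A location whose finite-difference estimate is large in magnitude signals that moving the component would change the loss substantially, which is the signal a selection rule can use to relocate components toward more informative frequencies.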