AI Summary
To address poor generalization in few-shot fine-tuning of large pretrained models and the prohibitively high computational and memory overhead of Sharpness-Aware Minimization (SAM), this paper proposes Bi-LoRA, a bidirectional low-rank adaptation framework. Its core innovation is a dual-LoRA design: a primary module adapts to the task via standard gradient descent, while an auxiliary module explicitly models adversarial weight perturbations to estimate the sharpness of the loss landscape. Because the two modules are decoupled, they can be optimized jointly in a single pass, eliminating SAM's costly two-pass forward/backward computation. This design retains LoRA's parameter efficiency while enabling, for the first time, scalable and efficient sharpness-aware fine-tuning. Experiments across multiple tasks and architectures demonstrate that Bi-LoRA significantly improves generalization over baselines, with memory and computational overhead nearly matching standard LoRA's and substantially lower than SAM's.
Abstract
Fine-tuning large-scale pre-trained models with limited data poses significant challenges for generalization. While Sharpness-Aware Minimization (SAM) has proven effective at improving generalization by seeking flat minima, its substantial extra memory and computation overhead makes it impractical for large models. Integrating SAM with parameter-efficient fine-tuning methods such as Low-Rank Adaptation (LoRA) is a promising direction. However, we find that directly applying SAM to LoRA parameters confines the sharpness optimization to a restricted subspace, hindering its effectiveness. To address this limitation, we propose Bi-directional Low-Rank Adaptation (Bi-LoRA), which introduces an auxiliary LoRA module to model SAM's adversarial weight perturbations. Bi-LoRA decouples SAM's weight perturbations from LoRA optimization: the primary LoRA module adapts to the target task via standard gradient descent, while the auxiliary module captures the sharpness of the loss landscape through gradient ascent. This dual-module design enables Bi-LoRA to capture broader sharpness and reach flatter minima while remaining memory-efficient. A further benefit is that the two modules can be optimized and perturbed simultaneously, eliminating SAM's doubled training cost. Extensive experiments across diverse tasks and architectures demonstrate Bi-LoRA's efficiency and effectiveness in enhancing generalization.
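As a rough illustration of the dual-module mechanism described above, the following minimal NumPy sketch runs one joint step on a toy linear regression problem. All shapes, step sizes, and the task itself are made-up assumptions for illustration, not the paper's actual setup; the point is only the structure: a single gradient computed at the perturbed effective weight drives both the primary module's descent update and the auxiliary module's ascent update, so no second forward/backward pass is needed.

```python
import numpy as np

# Toy Bi-LoRA-style layer (illustrative sketch, NOT the paper's code).
# Hypothetical sizes: batch n, input dim d, output dim k, LoRA rank r.
rng = np.random.default_rng(0)
n, d, k, r = 32, 16, 8, 4

W0 = 0.1 * rng.normal(size=(d, k))   # frozen pretrained weight
B_p = np.zeros((d, r))               # primary LoRA pair (task adaptation)
A_p = rng.normal(size=(r, k))
B_a = np.zeros((d, r))               # auxiliary LoRA pair (perturbation)
A_a = 0.1 * rng.normal(size=(r, k))

X = rng.normal(size=(n, d))          # toy regression batch
Y = rng.normal(size=(n, k))

def loss(W):
    # Mean squared error of the linear model X @ W against targets Y.
    return 0.5 * np.mean(np.sum((X @ W - Y) ** 2, axis=1))

def grad(W):
    # Gradient of the quadratic loss w.r.t. the effective weight W.
    return X.T @ (X @ W - Y) / n

# One forward/backward at the PERTURBED effective weight serves both
# updates below -- this single pass replaces SAM's two-pass scheme.
W_eff = W0 + B_p @ A_p + B_a @ A_a
loss_before = loss(W_eff)
g = grad(W_eff)

lr, rho = 1e-2, 1e-3                 # descent and ascent step sizes
# Primary module: gradient DESCENT (adapts to the task).
B_p, A_p = B_p - lr * g @ A_p.T, A_p - lr * B_p.T @ g
# Auxiliary module: gradient ASCENT (models the adversarial
# perturbation that exposes sharpness; rho keeps it small).
B_a, A_a = B_a + rho * g @ A_a.T, A_a + rho * B_a.T @ g

loss_after = loss(W0 + B_p @ A_p + B_a @ A_a)
```

In a real implementation the auxiliary perturbation would additionally be norm-bounded (analogous to SAM's ρ-ball) and attached to transformer weight matrices rather than a single linear layer; this sketch only conveys the single-pass descent/ascent structure.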