🤖 AI Summary
This work addresses the limitations of existing Low-Rank Adaptation (LoRA) methods, which typically employ a uniform rank across all layers despite significant inter-layer differences in importance, and rely on instantaneous gradient-based scoring that is highly susceptible to noise, leading to unstable rank allocation. To overcome these issues, the paper proposes IGU-LoRA, the first approach to integrate Integrated Gradients into LoRA for assessing layer-wise parameter sensitivity. Coupled with an uncertainty-aware mechanism based on exponential moving averages with deviation tracking, IGU-LoRA dynamically assigns adaptation ranks per layer. Under identical parameter budgets, the method consistently outperforms state-of-the-art parameter-efficient fine-tuning (PEFT) approaches across diverse model architectures and downstream tasks, achieving superior fine-tuning performance and robustness.
📝 Abstract
As large language models (LLMs) scale to billions of parameters, full-parameter fine-tuning becomes compute- and memory-prohibitive. Parameter-efficient fine-tuning (PEFT) mitigates this issue by updating only a small set of task-specific parameters while keeping the base model frozen. Among PEFT approaches, low-rank adaptation (LoRA) is widely adopted; however, it enforces a uniform rank across layers despite substantial variation in layer importance, motivating layer-wise rank allocation. Recent adaptive-rank variants (e.g., AdaLoRA) allocate ranks based on importance scores, yet typically rely on instantaneous gradients that capture only local sensitivity, overlooking non-local, pathwise effects within the same layer, which yields unstable and biased scores. To address this limitation, we introduce IGU-LoRA, an adaptive-rank LoRA that (i) computes within-layer Integrated Gradients (IG) sensitivities and aggregates them into a layer-level score for rank allocation, and (ii) applies an uncertainty-aware scheme using exponential moving averages with deviation tracking to suppress noisy updates and calibrate rank selection. Theoretically, we prove an upper bound on the composite trapezoidal-rule approximation error for parameter-space IG under a pathwise Hessian-Lipschitz condition, which informs the quadrature budget. Across diverse tasks and architectures, IGU-LoRA consistently outperforms strong PEFT baselines at matched parameter budgets, improving downstream accuracy and robustness. Ablations confirm the contributions of pathwise within-layer sensitivity estimates and uncertainty-aware selection to effective rank allocation. Our code is publicly available at https://github.com/withyou12/igulora.git
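For a concrete picture of the two ingredients described in the abstract, the sketch below illustrates (i) per-layer parameter-space IG sensitivities approximated with a composite trapezoidal rule along the straight-line path from the pretrained weights to the current weights, and (ii) uncertainty-aware rank allocation via exponential moving averages with deviation tracking. This is a minimal, hypothetical sketch: the function names and signatures, the calibration rule `ema / (1 + kappa * dev)`, and all hyperparameters (`num_steps`, `beta`, `kappa`, the proportional rank split) are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: per-layer IG sensitivity + uncertainty-aware rank allocation.
# Names, signatures, and hyperparameters are illustrative assumptions, not the paper's recipe.
import torch


def layer_ig_scores(model, loss_fn, batch, baseline_state, adapted_params, num_steps=8):
    """Approximate parameter-space Integrated Gradients for each adapted layer with a
    composite trapezoidal rule along the path from the pretrained weights (baseline)
    to the current weights, then aggregate attributions into one score per layer.
    `adapted_params` names parameters that receive gradients (e.g., adapted layers)."""
    current_state = {n: p.detach().clone() for n, p in model.named_parameters()}
    grad_accum = {n: torch.zeros_like(current_state[n]) for n in adapted_params}

    for k in range(num_steps + 1):
        alpha = k / num_steps
        weight = 0.5 if k in (0, num_steps) else 1.0  # trapezoidal endpoint weights
        with torch.no_grad():  # move the model to the interpolated point W0 + alpha*(W1 - W0)
            for n, p in model.named_parameters():
                p.copy_(baseline_state[n] + alpha * (current_state[n] - baseline_state[n]))
        model.zero_grad()
        loss_fn(model, batch).backward()  # illustrative signature for the task loss
        for n, p in model.named_parameters():
            if n in grad_accum and p.grad is not None:
                grad_accum[n] += weight * p.grad

    with torch.no_grad():  # restore the current weights
        for n, p in model.named_parameters():
            p.copy_(current_state[n])

    scores = {}
    for n in adapted_params:
        delta = current_state[n] - baseline_state[n]
        ig = delta * grad_accum[n] / num_steps  # IG_j ~ delta_j * averaged gradient_j
        scores[n] = ig.abs().sum().item()       # one possible within-layer aggregation
    return scores


def allocate_ranks(raw_scores, ema, dev, total_rank, beta=0.85, kappa=1.0, min_rank=1):
    """Smooth raw layer scores with an exponential moving average, track their absolute
    deviation as an uncertainty signal, downweight unstable layers, and split a fixed
    total rank budget proportionally to the calibrated scores."""
    calibrated = {}
    for n, s in raw_scores.items():
        ema[n] = beta * ema.get(n, s) + (1 - beta) * s
        dev[n] = beta * dev.get(n, 0.0) + (1 - beta) * abs(s - ema[n])
        calibrated[n] = ema[n] / (1.0 + kappa * dev[n])  # penalize noisy layers
    total = sum(calibrated.values()) or 1.0
    return {n: max(min_rank, round(total_rank * c / total)) for n, c in calibrated.items()}
```

In such a sketch, `num_steps` plays the role of the quadrature budget that the trapezoidal error bound is meant to inform, and the resulting per-layer ranks would then be realized by growing or pruning each layer's LoRA factors during training.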