Spectral Surgery: Training-Free Refinement of LoRA via Gradient-Guided Singular Value Reweighting

📅 2026-03-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses inefficient spectral utilization within the low-rank subspace of trained LoRA adapters, where many singular directions are unhelpful or even detrimental to the downstream task. The authors propose the first training-free post-processing method for LoRA: they perform SVD on the trained LoRA update, estimate the sensitivity of each singular component via gradients computed on a small calibration set, and reweight the singular values according to their sensitivity while preserving the original singular directions. The approach adjusts only around 1,000 scalar coefficients and yields consistent performance gains across four benchmarks on Llama-3.1-8B and Qwen3-8B, achieving up to a 4.4-point improvement on CommonsenseQA and a 2.4-point gain in HumanEval pass@1.

📝 Abstract
Low-Rank Adaptation (LoRA) improves downstream performance by restricting task updates to a low-rank parameter subspace, yet how this limited capacity is allocated within a trained adapter remains unclear. Through a geometric and empirical study across multiple tasks and backbones, we find that trained LoRA updates often exhibit an inefficient spectrum: task effects concentrate in a small subset of singular directions, while many remaining components are neutral or detrimental, motivating post-hoc refinement within the learned subspace. We propose Spectral Surgery, a training-free refinement that decomposes a LoRA update with SVD, estimates per-component sensitivity using gradients on a small calibration set, and reweights singular values under a magnitude constraint while keeping the learned directions fixed. Across Llama-3.1-8B and Qwen3-8B on four benchmarks, Spectral Surgery yields consistent gains (up to +4.4 points on CommonsenseQA and +2.4 pass@1 on HumanEval) by adjusting only $\approx 1{,}000$ scalar coefficients. These results demonstrate that SVD-structured, low-cost parameter editing can serve as a practical route to improving trained LoRA adapters in a purely post-hoc manner.
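The refinement pipeline described in the abstract (SVD of the LoRA update, per-component sensitivity on a calibration set, constrained reweighting of singular values with fixed directions) can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the finite-difference sensitivity estimate, the step size `lr`, and the simple clipping constraint are placeholder choices standing in for the paper's gradient-based sensitivity and magnitude constraint.

```python
import numpy as np

def spectral_surgery(delta_w, calib_loss, lr=0.05, max_scale=1.5, eps=1e-4):
    """Toy sketch of spectral surgery on a LoRA update matrix.

    delta_w    : the trained LoRA update (e.g. B @ A), shape (d_out, d_in)
    calib_loss : callable mapping a candidate update matrix to a scalar
                 calibration loss (stand-in for loss on a calibration set)
    Returns a refined update with the same singular directions U, V
    but reweighted singular values.
    """
    # Decompose the learned update; directions U, Vt stay fixed throughout.
    U, s, Vt = np.linalg.svd(delta_w, full_matrices=False)

    # Estimate per-component sensitivity dL/d(sigma_i). The paper uses
    # gradients on a small calibration set; finite differences are a
    # simple stand-in here.
    base = calib_loss(delta_w)
    grad = np.zeros_like(s)
    for i in range(len(s)):
        s_pert = s.copy()
        s_pert[i] += eps
        grad[i] = (calib_loss(U @ np.diag(s_pert) @ Vt) - base) / eps

    # Reweight singular values against their sensitivity, under a simple
    # magnitude constraint (a placeholder for the paper's constraint):
    # each sigma_i stays non-negative and within max_scale of its original.
    s_new = np.clip(s - lr * grad, 0.0, max_scale * s)
    return U @ np.diag(s_new) @ Vt
```

As a sanity check, one can define a toy calibration loss (e.g. Frobenius distance of the reconstructed update to a target matrix) and verify that the refined update lowers that loss while keeping the matrix shape unchanged; only the roughly `rank`-many singular values are adjusted, mirroring the paper's point that editing ~1,000 scalars suffices.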
Problem

Research questions and friction points this paper is trying to address.

Low-Rank Adaptation
LoRA
singular value decomposition
parameter efficiency
post-hoc refinement
Innovation

Methods, ideas, or system contributions that make the work stand out.

LoRA
Spectral Surgery
Singular Value Decomposition
Training-Free Refinement
Gradient-Guided Reweighting
Zailong Tian
School of Computing and Information Systems, Singapore Management University
Yanzhe Chen
School of Computing, National University of Singapore
Zhuoheng Han
State Key Laboratory for Multimedia Information Processing, Peking University
Lizi Liao
Singapore Management University
Conversational Agents · Multimedia Analysis · Text Mining