High-Rank Structured Modulation for Parameter-Efficient Fine-Tuning

📅 2026-01-12
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work proposes SMoA, a high-rank structured modulation adapter that overcomes the limitations of low-rank adaptation methods such as LoRA, which are constrained by their low-rank assumption and often underperform full fine-tuning in capturing task-specific features. By selectively amplifying or suppressing key features across multiple subspaces while keeping the pre-trained weights frozen, SMoA enables high-rank parameter updates with fewer trainable parameters, thereby breaking the low-rank bottleneck and significantly enhancing model representational capacity. Evaluated on ten downstream tasks, SMoA consistently outperforms LoRA and its variants, and ablation studies confirm the effectiveness and superiority of its structural design.
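For context on the baseline being compared against, LoRA constrains the weight update to a product of two rank-r factors, so the update can never exceed rank r. The sketch below is not taken from this paper; it is a standard minimal LoRA layer included only to make the "low-rank bottleneck" concrete.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA-style linear layer: frozen W plus a trainable rank-r update."""

    def __init__(self, d_out: int, d_in: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        # Frozen pre-trained weight W (random here only as a stand-in).
        self.weight = nn.Parameter(torch.randn(d_out, d_in) * 0.02, requires_grad=False)
        # Trainable low-rank factors: the update B @ A has rank at most r.
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.02)
        self.B = nn.Parameter(torch.zeros(d_out, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight: W + scaling * (B @ A); rank of the delta <= r.
        return x @ (self.weight + self.scaling * self.B @ self.A).T

layer = LoRALinear(d_out=768, d_in=768, r=8)
y = layer(torch.randn(4, 768))  # shape (4, 768)
```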

📝 Abstract
As the number of model parameters grows, parameter-efficient fine-tuning (PEFT) has become the go-to choice for tailoring pre-trained large language models. Low-Rank Adaptation (LoRA) approximates full-parameter fine-tuning with a low-rank update and is widely used to reduce resource requirements. However, lowering the rank limits representational capacity compared with full-parameter fine-tuning. We present SMoA, a high-rank Structured MOdulation Adapter that uses fewer trainable parameters while maintaining a higher rank, thereby improving the model's representational capacity and offering greater performance potential. The core idea is to freeze the original pre-trained weights and selectively amplify or suppress their important features across multiple subspaces. The subspace mechanism provides an efficient way to increase the capacity and complexity of the model. We conduct both theoretical analyses and empirical studies on various tasks. Experimental results show that SMoA outperforms LoRA and its variants on 10 tasks, with extensive ablation studies validating its effectiveness.
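The abstract does not spell out SMoA's exact parameterization, so the following is only a hypothetical sketch of the general idea it describes: the frozen weight is split into subspaces (here, simple column groups), and a small set of learned gains amplifies or suppresses its features. The class name, the column-group split, and the per-row gains are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn as nn

class StructuredModulationLinear(nn.Module):
    """Hypothetical structured-modulation layer: frozen W reweighted per subspace."""

    def __init__(self, d_out: int, d_in: int, num_subspaces: int = 4):
        super().__init__()
        assert d_in % num_subspaces == 0, "d_in must split evenly into subspaces"
        self.num_subspaces = num_subspaces
        # Frozen pre-trained weight W (random here only as a stand-in).
        self.weight = nn.Parameter(torch.randn(d_out, d_in) * 0.02, requires_grad=False)
        # One trainable gain per output feature per subspace; 1.0 is the identity,
        # values above 1 amplify and values below 1 suppress features of W.
        self.gains = nn.Parameter(torch.ones(d_out, num_subspaces))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d_in = self.weight.shape[1]
        # Broadcast the (d_out, k) gains to a full (d_out, d_in) modulation mask.
        mask = self.gains.repeat_interleave(d_in // self.num_subspaces, dim=1)
        # The effective update W * (mask - 1) acts elementwise on the frozen weight,
        # so it is not confined to a rank-r subspace despite the small parameter count.
        return x @ (self.weight * mask).T

layer = StructuredModulationLinear(d_out=768, d_in=768, num_subspaces=4)
y = layer(torch.randn(4, 768))  # shape (4, 768); only 768 * 4 gains are trainable
```

Under these assumptions, the trade-off matches what the abstract claims at a high level: the elementwise modulation of the frozen weight can produce a high-rank effective update while training far fewer parameters than the full weight matrix.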
Problem

Research questions and friction points this paper is trying to address.

parameter-efficient fine-tuning
low-rank adaptation
representational capacity
large language models
model adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

parameter-efficient fine-tuning
high-rank adaptation
structured modulation
subspace mechanism
LoRA
Authors
Yongkang Liu, Northeastern University (natural language processing)
Xing Li, Northeastern University, China
Mengjie Zhao, Northeastern University, China
Shanru Zhang, Northeastern University, China
Zijing Wang, Northeastern University, China
Qian Li, University of Science and Technology of China & UC Berkeley
Shi Feng, Northeastern University, China
Feiliang Ren, Northeastern University (machine translation, text mining)
Daling Wang, Northeastern University, China
Hinrich Schütze, University of Munich (natural language processing)