SSMLoRA: Enhancing Low-Rank Adaptation with State Space Model

📅 2025-02-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
LoRA suffers from sensitivity to parameter insertion locations, insufficient cross-layer feature reuse, and weak long-sequence modeling capability. To address these limitations, we propose SSMLoRA—a novel adaptation method that integrates state space models (SSMs) into the LoRA architecture for the first time. Under a sparse insertion scheme, SSMLoRA explicitly models temporal dependencies among low-rank modules, enabling dynamic cross-layer low-rank feature modeling and efficient reuse. Crucially, it preserves LoRA’s original performance while substantially improving parameter efficiency. Experiments on the GLUE benchmark show that SSMLoRA achieves accuracy comparable to standard LoRA with 50% fewer trainable parameters. Moreover, it demonstrates superior generalization and modeling efficacy on long-sequence tasks, validating its enhanced capacity for capturing extended contextual dependencies.

📝 Abstract
Fine-tuning is a key approach for adapting language models to specific downstream tasks, but updating all model parameters becomes impractical as model sizes increase. Parameter-Efficient Fine-Tuning (PEFT) methods, such as Low-Rank Adaptation (LoRA), address this challenge by introducing additional adaptation parameters into pre-trained weight matrices. However, LoRA's performance varies across different insertion points within the model, highlighting potential parameter inefficiency due to unnecessary insertions. To this end, we propose SSMLoRA (State Space Model Low-Rank Adaptation), an extension of LoRA that incorporates a State Space Model (SSM) to interconnect low-rank matrices. SSMLoRA ensures that performance is maintained even with sparser insertions. SSMLoRA allows the model not only to map inputs to a low-rank space for better feature extraction but also to leverage the computations from the previous low-rank space. Our method achieves comparable performance to LoRA on the General Language Understanding Evaluation (GLUE) benchmark while using only half the parameters. Additionally, due to its structure, SSMLoRA shows promise in handling tasks with longer input sequences. You can find our code here: https://github.com/yuhkalhic/SSMLoRA.
Problem

Research questions and friction points this paper is trying to address.

Enhancing Low-Rank Adaptation efficiency
Reducing parameter usage in fine-tuning
Improving performance on long input sequences
Innovation

Methods, ideas, or system contributions that make the work stand out.

SSMLoRA integrates a State Space Model into LoRA
Enables sparse insertion of low-rank modules
Halves trainable parameters while maintaining performance
Jiayang Yu
Northeastern University
Yihang Zhang
Northeastern University, China
Bin Wang
Northeastern University, China
Peiqin Lin
LMU Munich
Yongkang Liu
Northeastern University, China
Shi Feng
Northeastern University, China