CoSA: Compressed Sensing-Based Adaptation of Large Language Models

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a limitation of existing parameter-efficient fine-tuning (PEFT) methods such as LoRA: their reliance on low-rank assumptions constrains expressiveness on tasks whose weight-update singular values are distributed relatively uniformly. Drawing on compressed sensing theory, the authors propose a weight update mechanism that moves beyond low-rank constraints by pairing fixed random projection matrices with a compact learnable core, which efficiently encodes weight updates in a low-dimensional space and reconstructs them accurately. Combining compressed sensing, random projections, and a multi-scale adaptation strategy, the method consistently matches or surpasses state-of-the-art PEFT approaches across ten diverse tasks and five large language models of varying scales, achieving both high expressivity and parameter efficiency.

📝 Abstract
Parameter-Efficient Fine-Tuning (PEFT) has emerged as a practical paradigm for adapting large language models (LLMs) without updating all parameters. Most existing approaches, such as LoRA and PiSSA, rely on low-rank decompositions of weight updates. However, the low-rank assumption may restrict expressivity, particularly in task-specific adaptation scenarios where singular values are distributed relatively uniformly. To address this limitation, we propose CoSA (Compressed Sensing-Based Adaptation), a new PEFT method extended from compressed sensing theory. Instead of constraining weight updates to a low-rank subspace, CoSA expresses them through fixed random projection matrices and a compact learnable core. We provide a formal theoretical analysis of CoSA as a synthesis process, proving that weight updates can be compactly encoded into a low-dimensional space and mapped back through random projections. Extensive experimental results show that CoSA provides a principled perspective for efficient and expressive multi-scale model adaptation. Specifically, we evaluate CoSA on 10 diverse tasks, including natural language understanding and generation, employing 5 models of different scales from RoBERTa, Llama, and Qwen families. Across these settings, CoSA consistently matches or outperforms state-of-the-art PEFT methods.
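The core idea in the abstract, replacing LoRA's trainable low-rank factors with fixed random projections around a small learnable core, can be sketched as follows. This is a minimal illustration assuming an update of the form ΔW = P C Q with frozen random P, Q and trainable core C; the dimensions, scaling, and the name `delta_w` are illustrative assumptions, not the authors' exact formulation (which also includes a multi-scale strategy not shown here).

```python
import numpy as np

# Hedged sketch: synthesize a full-size weight update from fixed random
# projections and a small learnable core, as the abstract describes.
# Only the core C would be trained; P and Q stay frozen.

rng = np.random.default_rng(0)
d_out, d_in, k = 64, 64, 8  # layer dimensions and core size (k << d)

# Fixed random projection matrices, scaled so the synthesized update
# has well-behaved norms (common practice for random projections).
P = rng.standard_normal((d_out, k)) / np.sqrt(k)  # frozen
Q = rng.standard_normal((k, d_in)) / np.sqrt(k)   # frozen

def delta_w(C):
    """Map the small learnable core C back to a full-size weight update."""
    return P @ C @ Q  # shape (d_out, d_in)

# Initializing the core at zero means fine-tuning starts exactly from
# the base model, mirroring LoRA's zero-initialized update.
C = np.zeros((k, k))
assert np.allclose(delta_w(C), 0.0)

# Trainable parameter count is k*k for the core, versus k*(d_in + d_out)
# for a rank-k LoRA factorization of the same layer.
print("core params:", C.size)            # 64
print("LoRA rank-k params:", k * (d_in + d_out))  # 1024
```

The sketch only shows the synthesis direction (core to update); the paper's theoretical analysis concerns when such random projections can compactly encode and accurately reconstruct the updates.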
Problem

Research questions and friction points this paper is trying to address.

Parameter-Efficient Fine-Tuning
Low-Rank Decomposition
Expressivity Limitation
Large Language Models
Task-Specific Adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compressed Sensing
Parameter-Efficient Fine-Tuning
Random Projection
Low-Rank Approximation
Large Language Models