CorDA: Context-Oriented Decomposition Adaptation of Large Language Models

📅 2024-06-07
🏛️ Neural Information Processing Systems
📈 Citations: 4
Influential: 0
🤖 AI Summary
To address the dual challenges of insufficient task-context modeling and catastrophic forgetting of pretrained knowledge in parameter-efficient fine-tuning (PEFT), this paper proposes a context-guided weight-decomposition adaptation method. The approach performs singular value decomposition (SVD) on linear-layer weights, oriented by the covariance matrix of the input activations, to yield an interpretable low-rank decomposition. Building on this, the paper introduces a dual-path fine-tuning paradigm: low-singular-value components preserve general-purpose pretrained knowledge, while high-singular-value components are selectively adapted to downstream instructions. Component-wise freezing and updating are driven by the covariance structure, enabling precise knowledge retention and task-specific adaptation. Experiments on mathematical reasoning, code generation, and instruction-following benchmarks demonstrate substantial improvements over state-of-the-art PEFT methods, achieving performance close to full fine-tuning while effectively mitigating catastrophic forgetting.
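The covariance-oriented decomposition described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the layer sizes, sample count, and the small regularization term added for invertibility are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 6, 2

W = rng.standard_normal((d_out, d_in))       # pretrained linear-layer weight
X = rng.standard_normal((32, d_in))          # a few sampled input activations
C = X.T @ X / len(X) + 1e-4 * np.eye(d_in)   # input covariance (regularized so it is invertible)

# Context-oriented SVD: decompose W @ C rather than W itself,
# so the factorization is oriented by the activation statistics.
U, S, Vt = np.linalg.svd(W @ C, full_matrices=False)

# Multiplying by the inverse covariance reconstructs the original weight.
C_inv = np.linalg.inv(C)
W_rec = (U * S) @ Vt @ C_inv

# Knowledge-preserved option: the smallest-r singular components become the
# learnable adapter; the remaining components are frozen.
W_frozen  = (U[:, :-r] * S[:-r]) @ Vt[:-r, :] @ C_inv
W_adapter = (U[:, -r:] * S[-r:]) @ Vt[-r:, :] @ C_inv
```

Because the two parts sum exactly back to the original weight, the split changes nothing at initialization; only which part receives gradients differs.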

📝 Abstract
Current parameter-efficient fine-tuning (PEFT) methods build adapters that are largely agnostic of the context of the downstream task to learn, or of the important knowledge to maintain. As a result, there is often a performance gap compared to full-parameter fine-tuning, while the fine-tuned model suffers from catastrophic forgetting of the pre-trained world knowledge. In this paper, we propose CorDA, a Context-oriented Decomposition Adaptation method that builds learnable task-aware adapters from weight decomposition oriented by the context of the downstream task or of the world knowledge to maintain. Concretely, we collect a few data samples and, for each linear layer of a pre-trained LLM, perform singular value decomposition on the weight multiplied by the covariance matrix of the input activations computed on these samples. The inverse of the covariance matrix is then multiplied with the decomposed components to reconstruct the original weights. In this way, the context of the representative samples decides the orientation of the factorization. Our method enables two options: knowledge-preserved adaptation and instruction-previewed adaptation. For the former, we use question-answering samples to obtain the covariance matrices, and initialize a learnable adapter from the decomposed components with the smallest $r$ singular values, freezing the others so that the world knowledge is better preserved. For the latter, we use the instruction data from the fine-tuning task, such as math or coding, to orient the decomposition and train the largest $r$ components, which correspond most to the task to learn. We conduct extensive experiments on Math, Code, and Instruction Following tasks.
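The two options in the abstract differ only in which end of the singular spectrum becomes trainable. A hedged NumPy sketch of that selection logic (the function name, signature, and mode strings are illustrative, not from the paper):

```python
import numpy as np

def corda_split(W, C, r, mode="knowledge-preserved"):
    """Split a weight W into a frozen part and a learnable adapter via
    covariance-oriented SVD. Hypothetical helper; shapes: W (out, in), C (in, in)."""
    U, S, Vt = np.linalg.svd(W @ C, full_matrices=False)  # singular values descending
    C_inv = np.linalg.inv(C)
    if mode == "knowledge-preserved":
        learn = slice(-r, None)   # adapt the smallest-r components
        keep  = slice(None, -r)   # freeze the rest, where world knowledge concentrates
    elif mode == "instruction-previewed":
        learn = slice(None, r)    # adapt the largest-r, most task-aligned components
        keep  = slice(r, None)
    else:
        raise ValueError(f"unknown mode: {mode}")
    W_frozen    = (U[:, keep]  * S[keep])  @ Vt[keep, :]  @ C_inv
    W_learnable = (U[:, learn] * S[learn]) @ Vt[learn, :] @ C_inv
    return W_frozen, W_learnable
```

In either mode the frozen and learnable parts sum back to the original weight, so the choice only determines which components gradient updates may touch.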
Problem

Research questions and friction points this paper is trying to address.

Improves parameter-efficient fine-tuning for task-aware adaptation.
Reduces catastrophic forgetting of pre-trained world knowledge.
Enhances performance using context-oriented decomposition methods.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Context-oriented task-aware adapters for fine-tuning
Singular value decomposition with covariance matrices
Knowledge-preserved and instruction-previewed adaptation options
Yibo Yang
King Abdullah University of Science and Technology (KAUST)
Xiaojie Li
Harbin Institute of Technology (Shenzhen), Peng Cheng Laboratory
Zhongzhu Zhou
Ph.D. Candidate at the University of Sydney
MLSys, Efficient ML, Hardware/Software Codesign
S. Song
University of Sydney
Jianlong Wu
Professor, Harbin Institute of Technology (Shenzhen)
Computer Vision, Multimodal Learning
Liqiang Nie
Harbin Institute of Technology (Shenzhen)
Bernard Ghanem
Professor, King Abdullah University of Science and Technology
Computer Vision, Machine Learning