NEAT: Nonlinear Parameter-efficient Adaptation of Pre-trained Models

📅 2024-10-02
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing parameter-efficient fine-tuning (PEFT) methods such as LoRA are constrained by a linear low-rank assumption, which limits their capacity to model the nonlinear weight dynamics that emerge during optimization and leaves both an expressivity shortfall and a performance gap relative to full fine-tuning. This work proposes the first nonlinear PEFT paradigm: it introduces lightweight neural networks that apply nonlinear transformations to the frozen pre-trained weights, explicitly modeling the cumulative update trajectory. The authors establish theoretically that the method matches the expressivity of full-parameter fine-tuning with significantly fewer parameters, overcoming the representational limits of low-rank modeling. Extensive experiments across four major benchmarks and over twenty diverse datasets show that the approach consistently outperforms state-of-the-art PEFT methods (e.g., LoRA) and substantially narrows the gap to full-parameter fine-tuning on both vision and language tasks.

📝 Abstract
Fine-tuning pre-trained models often yields state-of-the-art performance but is computationally expensive when updating all parameters. Parameter-efficient fine-tuning (PEFT) methods, such as Low-Rank Adaptation (LoRA), address this by freezing pre-trained weights and introducing low-rank matrices. However, because LoRA relies on low-rank decomposition, it struggles to capture complex nonlinear dynamics and optimal optimization trajectories, resulting in a performance gap relative to full fine-tuning and inefficient parameter utilization. To overcome these issues, we propose NEAT, a nonlinear PEFT approach that employs a lightweight neural network to learn a nonlinear transformation of the pre-trained weights, thereby better approximating cumulative weight updates. Our theoretical analysis shows that NEAT achieves greater efficiency than LoRA while maintaining equivalent expressivity. Extensive experiments on four benchmarks and over twenty datasets demonstrate that NEAT significantly outperforms state-of-the-art baselines in both vision and text tasks.
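The abstract's core mechanism, a small trainable network that maps the frozen pre-trained weight matrix to a nonlinear update, can be sketched as follows. This is a minimal illustrative reading of the idea, not the paper's exact architecture: the adapter shape, hidden size, and activation are assumptions, and the class name `NEATLinear` is hypothetical.

```python
import torch
import torch.nn as nn

class NEATLinear(nn.Module):
    """Sketch of a NEAT-style adapted linear layer.

    The frozen pre-trained weight W is fed through a lightweight MLP f,
    and the effective weight becomes W + f(W), i.e. a nonlinear
    transformation of W standing in for the cumulative weight update.
    Hidden size and activation here are illustrative choices.
    """

    def __init__(self, base: nn.Linear, hidden_dim: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen

        in_features = base.weight.shape[1]
        # Lightweight network applied row-wise to W (shape: out x in),
        # producing a delta of the same shape. Only these parameters train.
        self.adapter = nn.Sequential(
            nn.Linear(in_features, hidden_dim, bias=False),
            nn.ReLU(),
            nn.Linear(hidden_dim, in_features, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = self.adapter(self.base.weight)  # nonlinear function of W itself
        w = self.base.weight + delta            # adapted weight W + f(W)
        return nn.functional.linear(x, w, self.base.bias)
```

Because the adapter's parameter count depends only on `in_features` and `hidden_dim`, it can be far smaller than a full weight update, which is the parameter-efficiency claim; the nonlinearity in `f` is what distinguishes this from LoRA's linear low-rank product.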
Problem

Research questions and friction points this paper is trying to address.

LoRA's linear low-rank decomposition limits expressivity
Persistent performance gap between PEFT and full fine-tuning
Inefficient parameter utilization on vision and text tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

First nonlinear PEFT approach, modeling cumulative weight updates
Lightweight neural network applied to frozen pre-trained weights
Theoretical expressivity matching full fine-tuning with fewer parameters