Neutral residues: revisiting adapters for model extension

📅 2024-10-03
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
To address the trade-off between learning an unseen domain (e.g., an entirely new language) and preserving performance on the original domain when extending a pretrained large language model, this paper proposes neutral residues: adapter blocks equipped with mixture-of-experts-style gating, trained so that each new residual block outputs near-zero values on the original domain. This lets the model gain genuine extra capacity for the new domain while leaving its behavior on the original domain almost unchanged. With only 20% extra learnable weights relative to a model trained on English, the approach achieves a significantly better trade-off between learning a new language and retaining English performance than fine-tuning, LoRA, and vanilla adapters. The paper analyzes the extension problem jointly along three axes: data, architecture, and training procedure.

📝 Abstract
We address the problem of extending a pretrained large language model to a new domain that was not seen at training time, such as adding a language for which the original model has seen little or no training data. Popular solutions like fine-tuning or low-rank adaptation are successful at domain adaptation, but formally they do not add any extra capacity and degrade performance in the original domain. Our paper analyzes this extension problem from three angles: data, architecture, and training procedure, which are advantageously considered jointly. In particular, we improve adapters and make it possible to learn an entire new language while ensuring that the output of the neural network is almost unchanged in the original domain. For this purpose, we modify the new residual blocks in a way that leads each new residual block to output near-zeros in the original domain. This solution of neutral residues, which borrows architectural components from mixture of experts, is effective: with only 20% extra learnable weights compared to an original model trained on English, we get results that are significantly better than concurrent approaches (fine-tuning, low-rank or vanilla adapters) in terms of the trade-off between learning a new language and not forgetting English.
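The abstract describes residual adapter blocks that are trained to output near-zeros on the original domain, using gating borrowed from mixture of experts. A minimal sketch of that idea, assuming a bottleneck adapter with a single sigmoid gate (the class name, shapes, and gate parameterization here are illustrative assumptions, not the paper's exact architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class NeutralResidualBlock:
    """Illustrative gated adapter: the residual contribution is scaled by a
    learned gate. If training drives the gate toward 0 on original-domain
    inputs, the block leaves those activations (almost) unchanged, while the
    new domain can use the full bottleneck capacity."""

    def __init__(self, d_model, d_adapter):
        # down/up projections of a bottleneck adapter (hypothetical shapes)
        self.W_down = rng.normal(0.0, 0.02, (d_model, d_adapter))
        self.W_up = rng.normal(0.0, 0.02, (d_adapter, d_model))
        # MoE-style gate, reduced here to a single scalar score per token
        self.w_gate = rng.normal(0.0, 0.02, (d_model,))
        self.b_gate = 0.0

    def forward(self, h):
        gate = sigmoid(h @ self.w_gate + self.b_gate)         # (batch,)
        delta = np.maximum(h @ self.W_down, 0.0) @ self.W_up  # ReLU bottleneck
        return h + gate[:, None] * delta                      # neutral when gate ~ 0

block = NeutralResidualBlock(d_model=16, d_adapter=4)
h = rng.normal(size=(2, 16))

# Simulate a gate saturated near 0, as the training objective encourages
# for original-domain inputs: the block output then matches its input.
block.b_gate = -20.0
out = block.forward(h)
print(np.allclose(out, h, atol=1e-6))  # → True
```

The key property is that neutrality is a *learned* behavior of the gate on original-domain inputs, not a structural constraint, so the same block can contribute a large residual on new-domain inputs.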
Problem

Research questions and friction points this paper is trying to address.

Extending pretrained LLMs to unseen domains without performance loss
Improving adapters via data, architecture, and training jointly
Enhancing new language adaptation while preserving original language ability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Improves adapters via data, architecture, training
Neutral residues output near-zeros on original domain
Outperforms fine-tuning, LoRA, and vanilla adapters on the adaptation/forgetting trade-off