Noise-Robustness Through Noise: Asymmetric LoRA Adaption with Poisoning Expert

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing parameter-efficient fine-tuning (PEFT) methods exhibit poor robustness under noisy data. To address this, we propose LoPE, a noise-robust PEFT framework integrating asymmetric LoRA with a Mixture-of-Experts (MoE) architecture. LoPE introduces a novel "noise-to-counter-noise" paradigm: during training, a dedicated poisoning expert is activated to deliberately inject structured noise, enhancing the model's ability to discriminate and suppress noise; during inference, this expert is masked out to ensure clean, high-fidelity outputs. Leveraging a two-stage noise injection strategy and selective expert masking, LoPE achieves significant performance gains over state-of-the-art PEFT baselines across multiple noisy multitask benchmarks, without requiring data cleaning, with minimal computational overhead, and with substantially improved robustness to label and input noise.

📝 Abstract
Current parameter-efficient fine-tuning methods for adapting pre-trained language models to downstream tasks are susceptible to interference from noisy data. Conventional noise-handling approaches either rely on laborious data pre-processing or employ model architecture modifications prone to error accumulation. In contrast to existing noise-processing paradigms, we propose a noise-robust adaptation method via asymmetric LoRA poisoning experts (LoPE), a novel framework that enhances model robustness to noise using only generated noisy data. Drawing inspiration from the mixture-of-experts architecture, LoPE strategically integrates a dedicated poisoning expert in an asymmetric LoRA configuration. Through a two-stage paradigm, LoPE performs noise injection on the poisoning expert during fine-tuning to enhance its noise discrimination and processing ability. During inference, we selectively mask the dedicated poisoning expert to leverage the purified knowledge acquired by the normal experts for noise-robust output. Extensive experiments demonstrate that LoPE achieves strong performance and robustness purely through low-cost noise injection, completely eliminating the need for data cleaning.
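To make the mechanism concrete, the asymmetric-LoRA-with-MoE design described above can be sketched roughly as follows: a shared low-rank down-projection A, per-expert up-projections B_i, and a gate whose output for the dedicated poisoning expert is masked at inference. All class, parameter, and variable names here are hypothetical, and the gating/masking details are assumptions inferred from the summary, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

class AsymmetricLoRAMoE:
    """Sketch of an asymmetric LoRA MoE adapter layer (hypothetical names):
    one shared down-projection A, per-expert up-projections B_i.
    Expert `poison_idx` plays the role of the dedicated poisoning expert."""

    def __init__(self, d_in, d_out, rank, n_experts, poison_idx=-1):
        self.A = rng.standard_normal((d_in, rank)) * 0.01       # shared A (asymmetric LoRA)
        self.B = [np.zeros((rank, d_out)) for _ in range(n_experts)]  # per-expert B, zero-init as in LoRA
        self.gate = rng.standard_normal((d_in, n_experts)) * 0.01
        self.poison_idx = poison_idx % n_experts

    def forward(self, x, training=True):
        logits = x @ self.gate
        if not training:
            # selective expert masking: the poisoning expert is
            # excluded from routing at inference time
            logits[..., self.poison_idx] = -np.inf
        # softmax over experts
        w = np.exp(logits - logits.max(-1, keepdims=True))
        w /= w.sum(-1, keepdims=True)
        h = x @ self.A  # shared low-rank projection
        out = sum(w[..., i:i + 1] * (h @ self.B[i]) for i in range(len(self.B)))
        return out, w
```

At inference the poisoning expert's routing weight is exactly zero, so only the normal experts contribute to the adapter output, matching the "mask at inference" behavior the summary describes.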
Problem

Research questions and friction points this paper is trying to address.

Enhances model robustness to noisy data interference
Eliminates need for laborious data cleaning processes
Uses asymmetric LoRA with poisoning expert for noise handling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Asymmetric LoRA with poisoning expert
Noise injection enhances robustness
Selective masking during inference
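The paper's two-stage noise injection strategy is not detailed in this summary. As a purely illustrative placeholder, one common form of structured noise is label corruption, which might be injected for the poisoning expert's training data roughly as follows; the function name and flip-to-random-other-class strategy are assumptions, not the authors' method.

```python
import random

def inject_label_noise(labels, noise_rate=0.2, seed=0):
    """Hypothetical sketch of noise injection: flip a `noise_rate`
    fraction of labels to a uniformly chosen different class.
    (The paper's actual two-stage strategy is not specified here.)"""
    rng = random.Random(seed)
    classes = sorted(set(labels))
    noisy = []
    for y in labels:
        if rng.random() < noise_rate:
            noisy.append(rng.choice([c for c in classes if c != y]))
        else:
            noisy.append(y)
    return noisy
```

Training the poisoning expert on such deliberately corrupted labels, while the normal experts see clean targets, is one plausible reading of how noise injection could sharpen the model's ability to discriminate noise.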