TT-LoRA MoE: Unifying Parameter-Efficient Fine-Tuning and Sparse Mixture-of-Experts

📅 2025-04-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address scalability bottlenecks arising from the coupling of Parameter-Efficient Fine-Tuning (PEFT) and Mixture-of-Experts (MoE) in large-model multi-task deployment, this paper proposes TT-LoRA MoE, a two-stage decoupled training framework. First, the backbone is frozen while expert adapters are trained independently via tensor-train (TT)-decomposed low-rank adaptation (TT-LoRA); second, a sparse router is optimized separately over the frozen experts, eliminating cross-task interference and catastrophic forgetting. Crucially, the router enables dynamic expert selection without explicit task identifiers. By combining TT decomposition, LoRA, and sparse MoE routing, TT-LoRA MoE uses only 2% of LoRA's parameters, 0.3% of Adapters', and 0.03% of AdapterFusion's. In multi-task joint inference, it outperforms AdapterFusion by 4 points on average while delivering substantial memory savings and strong horizontal scalability.
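To see where the parameter savings in the summary come from, the sketch below compares the parameter count of a standard LoRA adapter with a tensor-train factorization of the same weight. The layer sizes, reshaping, and TT ranks are illustrative assumptions, not the paper's reported settings.

```python
# Illustrative parameter-count comparison: standard LoRA vs. a
# tensor-train (TT) factorization of the adapter weight.
# All shapes and TT ranks here are hypothetical examples.

def lora_params(d_in, d_out, r):
    # LoRA stores two dense factors: A (d_in x r) and B (r x d_out).
    return d_in * r + r * d_out

def tt_params(dims, ranks):
    # A TT decomposition stores one 3-way core per tensor mode:
    # core_k has shape (ranks[k], dims[k], ranks[k+1]),
    # with boundary ranks fixed to 1.
    assert len(ranks) == len(dims) + 1 and ranks[0] == ranks[-1] == 1
    return sum(ranks[k] * dims[k] * ranks[k + 1] for k in range(len(dims)))

d_in, d_out, r = 768, 768, 8
dense = lora_params(d_in, d_out, r)  # 12,288 parameters

# Reshape the 768x768 adapter into a 6-way tensor (8*12*8 by 8*12*8)
# and factor it with small TT ranks.
dims = [8, 12, 8, 8, 12, 8]
ranks = [1, 4, 4, 4, 4, 4, 1]
tt = tt_params(dims, ranks)          # 704 parameters

print(dense, tt, f"{tt / dense:.1%}")  # prints: 12288 704 5.7%
```

With these toy settings the TT form keeps under 6% of the LoRA parameters per layer; the exact ratio depends entirely on the chosen reshaping and TT ranks.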

📝 Abstract
We propose Tensor-Trained Low-Rank Adaptation Mixture of Experts (TT-LoRA MoE), a novel computational framework integrating Parameter-Efficient Fine-Tuning (PEFT) with sparse MoE routing to address scalability challenges in large model deployments. Unlike traditional MoE approaches, which face substantial computational overhead as expert counts grow, TT-LoRA MoE decomposes training into two distinct, optimized stages. First, we independently train lightweight, tensorized low-rank adapters (TT-LoRA experts), each specialized for specific tasks. Subsequently, these expert adapters remain frozen, eliminating inter-task interference and catastrophic forgetting in multi-task settings. A sparse MoE router, trained separately, dynamically leverages base model representations to select exactly one specialized adapter per input at inference time, automating expert selection without explicit task specification. Comprehensive experiments confirm our architecture retains the memory efficiency of low-rank adapters, seamlessly scales to large expert pools, and achieves robust task-level optimization. This structured decoupling significantly enhances computational efficiency and flexibility: it uses only 2% of LoRA's, 0.3% of Adapters', and 0.03% of AdapterFusion's parameters, and outperforms AdapterFusion by 4 points in multi-tasking, enabling practical and scalable multi-task inference deployments.
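The second stage described in the abstract, a separately trained router that picks exactly one frozen expert per input from the base model's representation, can be sketched as follows. The expert adapters are stand-ins (plain low-rank pairs rather than TT-factorized cores), and all weights are random placeholders, so this only illustrates the top-1 routing mechanics, not the paper's trained system.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts = 16, 4  # hidden size and expert count (illustrative)

# Frozen, independently trained expert adapters. For brevity these are
# plain low-rank (A, B) pairs; in TT-LoRA each would be a chain of TT cores.
r = 2
experts = [(rng.normal(size=(d, r)), rng.normal(size=(r, d)))
           for _ in range(n_experts)]

# Separately trained router: maps a base-model representation to expert logits.
W_router = rng.normal(size=(d, n_experts))

def route_and_apply(h):
    """Select exactly one expert per input (hard top-1) and apply its adapter."""
    logits = h @ W_router
    k = int(np.argmax(logits))   # no explicit task identifier needed
    A, B = experts[k]
    return k, h + (h @ A) @ B    # residual low-rank adapter update

h = rng.normal(size=(d,))        # base-model representation of one input
k, out = route_and_apply(h)
print("selected expert:", k, "output shape:", out.shape)
```

Because routing is hard top-1, only one adapter's parameters are touched per input, which is what keeps inference cost flat as the expert pool grows.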
Problem

Research questions and friction points this paper is trying to address.

Integrates PEFT with sparse MoE for scalable large model deployment
Trains lightweight tensorized adapters to prevent task interference
Uses a sparse MoE router for efficient dynamic expert selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates tensorized low-rank adapters with MoE
Decouples expert training and sparse routing
Dynamically selects one expert per input