🤖 AI Summary
Prompt tuning (PT) suffers from weak generalization, while Mixture-of-Experts (MoE) ensembles yield unstable performance gains: both are key challenges in parameter-efficient fine-tuning (PEFT). Method: This work proposes the first integration of low-rank matrix decomposition with a learnable MoE routing mechanism into the PT framework, constructing a lightweight prompt expansion architecture that enables dynamic task adaptation while sharing parameters. Key components include: (i) low-rank compression of expert parameters, (ii) end-to-end trainable MoE routing, and (iii) a compact prompt embedding design. Contribution/Results: Evaluated on 17 question-answering and mathematical reasoning benchmarks, the method achieves new state-of-the-art (SOTA) performance: +1.49 F1 over PT and +2.13 over LoRA on QA tasks; +10.75 and +0.44 absolute accuracy gains over PT and LoRA, respectively, on mathematical reasoning; and 25% fewer parameters than LoRA. It significantly improves cross-task consistency and generalization.
📄 Abstract
Parameter-efficient fine-tuning (PEFT) methods have shown promise in adapting large language models, yet existing approaches exhibit counter-intuitive phenomena: integrating a router into prompt tuning (PT) increases training efficiency yet does not improve performance universally, while parameter reduction through matrix decomposition can improve performance in specific domains. Motivated by these observations and the modular nature of PT, we propose PT-MoE, a novel framework that integrates matrix decomposition with mixture-of-experts (MoE) routing for efficient PT. Results across 17 datasets demonstrate that PT-MoE achieves state-of-the-art performance in both question answering (QA) and mathematical problem-solving tasks, improving F1 score by 1.49 points over PT and 2.13 points over LoRA in QA tasks, while enhancing mathematical accuracy by 10.75 points over PT and 0.44 points over LoRA, all while using 25% fewer parameters than LoRA. Our analysis reveals that while PT methods generally excel in QA tasks and LoRA-based methods in math datasets, the integration of matrix decomposition and MoE in PT-MoE yields complementary benefits: decomposition enables efficient parameter sharing across experts while MoE provides dynamic adaptation, collectively enabling PT-MoE to demonstrate cross-task consistency and generalization abilities. These findings, along with ablation studies on routing mechanisms and architectural components, provide insights for future PEFT methods.
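To make the abstract's two ingredients concrete, the sketch below shows one plausible way to combine them: each expert's soft prompt is factored into a per-expert low-rank matrix times a shared factor (the parameter-sharing decomposition), and a learned router mixes the expert prompts per input (the MoE routing). This is a minimal, illustrative NumPy sketch under assumed shapes and names (`A`, `B`, `W_route`, `pt_moe_prompt` are all hypothetical), not the paper's actual implementation, which is trained end-to-end inside a language model.

```python
import numpy as np

rng = np.random.default_rng(0)
L, d, r, E = 8, 64, 4, 3  # prompt length, hidden dim, rank, number of experts (illustrative)

# Hypothetical parameters: a factor A shared by all experts plus small per-expert factors B,
# so E experts cost E*L*r + r*d parameters instead of E*L*d full prompts.
A = rng.standard_normal((r, d)) * 0.02        # shared low-rank factor (r, d)
B = rng.standard_normal((E, L, r)) * 0.02     # per-expert low-rank factors (E, L, r)
W_route = rng.standard_normal((d, E)) * 0.02  # router projection (trainable in the real method)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def pt_moe_prompt(x_repr):
    """Return an input-conditioned soft prompt (L, d) for input representation x_repr (d,)."""
    gates = softmax(x_repr @ W_route)           # (E,) routing distribution over experts
    prompts = B @ A                             # (E, L, d) expert prompts via decomposition
    return np.tensordot(gates, prompts, axes=1)  # (L, d) gated mixture of expert prompts

prompt = pt_moe_prompt(rng.standard_normal(d))  # would be prepended to the model's input embeddings
```

The mixture is what gives the "dynamic adaptation" the abstract describes: different inputs produce different gate distributions, hence different effective prompts, while the shared factor `A` keeps the experts' parameters coupled.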