🤖 AI Summary
To address the poor adaptability of large language models (LLMs) in multi-task, multi-source heterogeneous settings and the susceptibility of prompt tuning to interference, this paper proposes Mixture of Prompts (MoPs), a gating-driven hybrid prompting mechanism. MoPs introduces a learnable soft gating module that dynamically identifies task requirements and composes task-specific prompts, enabling fine-grained skill routing without modifying any backbone parameters, which preserves model efficiency and yields strong cross-task and cross-source generalization. Evaluated under both federated and centralized multi-task adaptation frameworks, MoPs significantly mitigates inter-prompt interference: perplexity drops by 20–70% in federated settings and 3–30% in centralized settings, consistently outperforming state-of-the-art baselines. The core innovation is the integration of learnable gating into prompt mixture design, unifying task awareness, expert collaboration, and efficient adaptation within a single framework.
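To make the gating idea concrete, here is a minimal PyTorch sketch of a soft-gated prompt mixture: a pool of trainable prompt groups is combined per example by a learned gate and prepended to the frozen backbone's input embeddings. The class name `MixtureOfPrompts`, the mean-pooled task descriptor, and all hyperparameters are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch (assumed design, not the authors' code) of a soft-gated
# mixture of prompt groups for a frozen transformer backbone.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfPrompts(nn.Module):
    def __init__(self, num_groups: int, prompt_len: int, hidden_dim: int):
        super().__init__()
        # One trainable soft prompt per "skill" group: shape (G, P, H).
        self.prompts = nn.Parameter(
            torch.randn(num_groups, prompt_len, hidden_dim) * 0.02
        )
        # Lightweight gate that scores each prompt group from an input summary.
        self.gate = nn.Linear(hidden_dim, num_groups)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (B, T, H) token embeddings from the frozen backbone.
        summary = input_embeds.mean(dim=1)               # (B, H) crude task descriptor
        weights = F.softmax(self.gate(summary), dim=-1)  # (B, G) soft group assignment
        # Weighted combination of prompt groups -> one prompt per example: (B, P, H).
        mixed_prompt = torch.einsum("bg,gph->bph", weights, self.prompts)
        # Prepend the mixed prompt; only `self.prompts` and `self.gate` are trained.
        return torch.cat([mixed_prompt, input_embeds], dim=1)
```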
📝 Abstract
Large Language Models (LLMs) have the ability to solve a variety of tasks, such as text summarization and mathematical questions, just out of the box, but they are often trained with a single task in mind. Due to high computational costs, the current trend is to use prompt instruction tuning to better adjust monolithic, pretrained LLMs for new -- but often individual -- downstream tasks. Thus, how one would expand prompt tuning to handle -- concomitantly -- heterogeneous tasks and data distributions remains a largely open question. To address this gap, we suggest the use of *Mixture of Prompts*, or MoPs, associated with smart gating functionality: the latter -- whose design is one of the contributions of this paper -- can identify relevant skills embedded in different groups of prompts and dynamically assign combined experts (i.e., collections of prompts), based on the target task. Additionally, MoPs are empirically agnostic to any model compression technique applied -- for efficiency reasons -- as well as to the instruction data source and task composition. In practice, MoPs can simultaneously mitigate prompt training "interference" in multi-task, multi-source scenarios (e.g., task and data heterogeneity across sources), as well as possible implications from model approximations. As a highlight, MoPs manage to decrease final perplexity from ~20% up to ~70%, as compared to baselines, in the federated scenario, and from ~3% up to ~30% in the centralized scenario.
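As a usage sketch of the "no backbone modification" point (reusing the `MixtureOfPrompts` module from the sketch above), the snippet below freezes a toy stand-in backbone and optimizes only the prompt pool and gate. The toy model, loss, and hyperparameters are placeholders, not the paper's actual setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for a pretrained (possibly compressed) backbone; purely illustrative.
hidden_dim, vocab_size = 64, 1000
embed = nn.Embedding(vocab_size, hidden_dim)
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=4, batch_first=True),
    num_layers=2,
)
lm_head = nn.Linear(hidden_dim, vocab_size)

for module in (embed, backbone, lm_head):
    for p in module.parameters():
        p.requires_grad = False          # backbone weights stay untouched

prompt_len = 8
mop = MixtureOfPrompts(num_groups=4, prompt_len=prompt_len, hidden_dim=hidden_dim)
optimizer = torch.optim.AdamW(mop.parameters(), lr=1e-3)    # train prompts + gate only

tokens = torch.randint(0, vocab_size, (2, 10))              # dummy batch of token ids
hidden = backbone(mop(embed(tokens)))                       # mixed prompt prepended, frozen forward
logits = lm_head(hidden)[:, prompt_len:, :]                 # drop the prompt positions
# Toy reconstruction loss over the original positions (a real setup would use
# shifted next-token targets); gradients reach only the MoP parameters.
loss = F.cross_entropy(logits.reshape(-1, vocab_size), tokens.reshape(-1))
loss.backward()
optimizer.step()
```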