Task Prompt Vectors: Effective Initialization through Multi-Task Soft-Prompt Transfer

📅 2024-08-02
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
📄 PDF
🤖 AI Summary
Existing soft prompt tuning methods lack multi-task modularity: adding new tasks necessitates retraining from scratch and prohibits arithmetic operations at the prompt level. To address this, we propose *task prompt vectors*—defined as the element-wise difference between a fine-tuned soft prompt and its randomly initialized counterpart—enabling the first soft-prompt-level task vectorization and additive composition. These vectors exhibit cross-task and cross-architecture invariance (e.g., BERT, LLaMA-2), supporting zero-shot or low-resource multi-task prompt initialization and combination. Evaluated on 12 NLU benchmarks, our approach enables effective new-task prompt initialization with only a few samples; moreover, additive composition of task prompt vectors consistently outperforms state-of-the-art baselines, demonstrating both computational efficiency and strong generalization across diverse tasks and models.

📝 Abstract
Prompt tuning is an efficient solution for training large language models (LLMs). However, current soft-prompt-based methods often sacrifice multi-task modularity, requiring the training process to be fully or partially repeated for each newly added task. While recent work on task vectors applied arithmetic operations on full model weights to achieve the desired multi-task performance, a similar approach for soft prompts is still missing. To this end, we introduce Task Prompt Vectors, created as the element-wise difference between the weights of tuned soft prompts and their random initializations. Experimental results on 12 NLU datasets show that task prompt vectors can be used in low-resource settings to effectively initialize prompt tuning on similar tasks. In addition, we show that task prompt vectors are independent of the random initialization of prompt tuning on 2 different language model architectures. This allows prompt arithmetic with pre-trained vectors from different tasks. In this way, we provide a competitive alternative to state-of-the-art baselines by arithmetic addition of task prompt vectors from multiple tasks.
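The core construction from the abstract can be sketched in a few lines of numpy. All shapes and values below are illustrative assumptions (a 20-token soft prompt with embedding dimension 768, random stand-ins for the tuned weights), not numbers from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical soft-prompt shape: 20 virtual tokens, embedding dim 768.
PROMPT_LEN, DIM = 20, 768

# theta_0: the random soft-prompt initialization.
# theta_ft: the same prompt after prompt tuning on a source task
# (here a random perturbation stands in for actual tuning).
theta_0 = rng.normal(size=(PROMPT_LEN, DIM))
theta_ft = theta_0 + 0.1 * rng.normal(size=(PROMPT_LEN, DIM))

# Task prompt vector: element-wise difference between the tuned
# soft prompt and its random initialization.
tau = theta_ft - theta_0

# Transfer: adding tau to a fresh random initialization gives a
# warm start for prompt tuning on a similar target task.
theta_new_init = rng.normal(size=(PROMPT_LEN, DIM))
theta_warm = theta_new_init + tau
```

By construction, adding the task prompt vector back onto its own initialization recovers the tuned prompt exactly, which is what makes the difference a reusable, initialization-relative representation of the task.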
Problem

Research questions and friction points this paper is trying to address.

Soft-prompt tuning lacks multi-task modularity
Each newly added task requires full or partial retraining
No arithmetic operations exist at the soft-prompt level
Innovation

Methods, ideas, or system contributions that make the work stand out.

Task Prompt Vectors enable multi-task soft-prompt transfer
Element-wise difference initializes prompt tuning effectively
Arithmetic addition combines task vectors for performance
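The additive composition named in the last bullet can be sketched as follows. The shapes, the shared initialization, and the stand-in "tuned" prompts are assumptions for illustration; in the paper the vectors come from actual prompt tuning runs:

```python
import numpy as np

rng = np.random.default_rng(1)
PROMPT_LEN, DIM = 20, 768  # hypothetical soft-prompt shape

# A single shared random initialization.
theta_0 = rng.normal(size=(PROMPT_LEN, DIM))

# Stand-ins for the same initialization tuned on two different tasks.
theta_a = theta_0 + 0.1 * rng.normal(size=(PROMPT_LEN, DIM))
theta_b = theta_0 + 0.1 * rng.normal(size=(PROMPT_LEN, DIM))

# One task prompt vector per task.
tau_a = theta_a - theta_0
tau_b = theta_b - theta_0

# Multi-task soft prompt via arithmetic addition of task prompt vectors.
theta_multi = theta_0 + tau_a + tau_b
```

Because the vectors are defined relative to the initialization, their sum applied to that same initialization yields a single prompt intended to carry both tasks, with no additional training required up front.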
Robert Belanec
Faculty of Information Technology, Brno University of Technology, Brno, Czechia
Simon Ostermann
German Research Institute for Artificial Intelligence (DFKI), Saarland Informatics Campus, Germany
Ivan Srba
Kempelen Institute of Intelligent Technologies
AI, Machine Learning, Natural Language Processing, Social Computing, Disinformation
Maria Bielikova
Kempelen Institute of Intelligent Technologies
artificial intelligence, machine learning, recommendation, user modelling, human-computer interaction