A Survey on Prompt Tuning

📅 2025-07-08
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This paper presents a systematic survey of prompt tuning—a parameter-efficient paradigm for adapting pre-trained language models—focusing specifically on the setting where the backbone model is frozen and only continuous, prefix-based prompt embeddings are optimized. Addressing key challenges including computational inefficiency and training instability, the work introduces the first unified taxonomy encompassing encoder-based, low-rank decomposition, and mixture-of-experts prompt tuning methods, rigorously distinguishing direct prompt learning from transferable prompt learning. Through methodological analysis and visualized performance comparisons across diverse benchmarks, it characterizes fundamental trade-offs among parameter count, optimization convergence, and generalization capability. The study provides both theoretical insights and practical guidelines for enhancing training robustness and extending prompt tuning to multi-task and low-resource scenarios.
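
The core mechanism is easy to make concrete. Below is a minimal sketch, assuming a HuggingFace-style language model that exposes `get_input_embeddings()` and accepts `inputs_embeds`; the class name, prompt length, and initialization scale are illustrative choices, not a specific method from the survey.

```python
import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    """Frozen backbone + trainable continuous prompt prepended to the input.

    A minimal prompt-tuning sketch: `backbone` is any model exposing an
    embedding layer and accepting `inputs_embeds` (HuggingFace-style).
    """

    def __init__(self, backbone, prompt_length=20):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():   # freeze every backbone weight
            p.requires_grad = False
        d_model = backbone.get_input_embeddings().weight.shape[1]
        # The only trainable parameters: prompt_length x d_model embeddings.
        self.prompt = nn.Parameter(torch.randn(prompt_length, d_model) * 0.02)

    def forward(self, input_ids, attention_mask):
        tok_emb = self.backbone.get_input_embeddings()(input_ids)
        batch = input_ids.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
        # Extend the attention mask so the prompt positions are attended to.
        prompt_mask = torch.ones(batch, self.prompt.size(0),
                                 device=attention_mask.device,
                                 dtype=attention_mask.dtype)
        attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        return self.backbone(inputs_embeds=inputs_embeds,
                             attention_mask=attention_mask)
```

Only `model.prompt` is handed to the optimizer, e.g. `torch.optim.AdamW([model.prompt], lr=1e-3)`, which is what makes the paradigm parameter-efficient: a 20-token prompt over a 768-dimensional backbone trains 15,360 parameters regardless of model size.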

📝 Abstract
This survey reviews prompt tuning, a parameter-efficient approach for adapting language models by prepending trainable continuous vectors to the input while keeping the model frozen. We classify existing approaches into two categories: direct prompt learning and transfer learning. Direct prompt learning covers general optimization approaches, encoder-based methods, decomposition strategies, and mixture-of-experts frameworks; transfer learning covers general transfer approaches, encoder-based methods, and decomposition strategies. For each method we analyze its design, innovations, insights, advantages, and disadvantages, with illustrative visualizations comparing the different frameworks. We identify challenges in computational efficiency and training stability, and discuss future directions for improving training robustness and broadening application scope.
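
The decomposition strategies named above share one idea: parameterize the prompt matrix as a product of smaller factors rather than training it directly. A minimal sketch of that idea, with an illustrative rank and dimensions (not any specific method from the survey):

```python
import torch
import torch.nn as nn

class LowRankPrompt(nn.Module):
    """Soft prompt parameterized as a low-rank product P = A @ B.

    Instead of training prompt_length x d_model parameters directly,
    train two smaller factors, cutting the trainable-parameter count
    from L*d to r*(L + d) for rank r << min(L, d).
    """

    def __init__(self, prompt_length=20, d_model=768, rank=4):
        super().__init__()
        self.A = nn.Parameter(torch.randn(prompt_length, rank) * 0.02)
        self.B = nn.Parameter(torch.randn(rank, d_model) * 0.02)

    def forward(self):
        return self.A @ self.B   # materialize the full prompt on the fly
```

With L=20, d=768, r=4, this trains 4 × (20 + 768) = 3,152 parameters versus 15,360 for a full prompt, one concrete instance of the parameter-count/expressiveness trade-off the survey characterizes.
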
Problem

Research questions and friction points this paper is trying to address.

Reviews prompt tuning as a parameter-efficient way to adapt language models
Classifies approaches into direct prompt learning and transfer learning
Identifies challenges in computational efficiency and training stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trainable continuous vectors prepended to frozen models
Direct prompt learning via general optimization, encoder-based, decomposition, and mixture-of-experts methods
Transfer learning methods that reuse learned prompts across tasks for broader application scope (see the sketch below)
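
To make the direct-versus-transfer distinction concrete, the sketch below shows the simplest transfer recipe: a prompt trained on a source task initializes the target-task prompt instead of random values. The function and the `train_prompt` loop it refers to are illustrative placeholders; the survey also covers encoder-based and decomposition-based transfer variants.

```python
import torch

def transfer_prompt(source_prompt: torch.Tensor) -> torch.nn.Parameter:
    """Initialize a target-task prompt from a converged source-task prompt.

    Direct prompt learning starts from random embeddings; the transfer
    variant clones the source prompt so target-task optimization starts
    from a task-informed point in embedding space.
    """
    return torch.nn.Parameter(source_prompt.detach().clone())

# Sketch of the overall recipe (train_prompt is a placeholder for any
# soft-prompt training loop, e.g. the SoftPromptModel sketch above):
#   source_prompt = train_prompt(task="source")        # direct learning
#   target_init   = transfer_prompt(source_prompt)     # warm start
#   target_prompt = train_prompt(task="target", init=target_init)
```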