🤖 AI Summary
This work addresses the fundamental question of why prefix-tuning and prompt-tuning are so effective despite updating only a small number of parameters. From a statistical learning perspective, we reveal their theoretical underpinnings. We formally prove that the reparameterization in prefix-tuning is not merely an empirical engineering heuristic: it imposes an implicit shared structure on the prefix key and value vectors, which substantially improves sample efficiency in parameter estimation. Leveraging a mixture-of-experts modeling framework and a rigorous parameter estimation error analysis, we provide a unified theoretical explanation for the success of both prompting paradigms. Empirical evaluations on benchmarks in both the vision and language domains demonstrate that these methods attain performance comparable to full-parameter fine-tuning, and all theoretical claims are validated through comprehensive experiments. The implementation is publicly available.
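To make the shared structure concrete, here is a minimal PyTorch sketch (all names, such as `ReparameterizedPrefix` and `shared_trunk`, are hypothetical and not taken from the paper's codebase): instead of learning the prefix key and value vectors as independent parameters, both are produced from one trainable prompt embedding through a shared MLP trunk, so their parameters are tied.

```python
import torch
import torch.nn as nn


class ReparameterizedPrefix(nn.Module):
    """Illustrative sketch: prefix keys/values generated from one shared trunk."""

    def __init__(self, num_prefix: int, embed_dim: int, hidden_dim: int):
        super().__init__()
        # One trainable prompt embedding feeds both keys and values.
        self.prompt = nn.Parameter(torch.randn(num_prefix, embed_dim))
        # Shared MLP trunk: this is the implicit tying between K and V.
        self.shared_trunk = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.Tanh(),
        )
        # Separate lightweight heads for keys and values.
        self.to_key = nn.Linear(hidden_dim, embed_dim)
        self.to_value = nn.Linear(hidden_dim, embed_dim)

    def forward(self) -> tuple[torch.Tensor, torch.Tensor]:
        h = self.shared_trunk(self.prompt)        # (num_prefix, hidden_dim)
        return self.to_key(h), self.to_value(h)   # prefix keys, prefix values


# A non-shared baseline would instead learn the prefix keys and values as
# two independent nn.Parameter tensors, with no common trunk.
```

In this reading, the non-shared baseline corresponds to the "non-shared alternatives" analyzed in the paper, and the shared-trunk variant is the one predicted to need fewer samples for accurate parameter estimation.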
📝 Abstract
Prompt-based techniques, such as prompt-tuning and prefix-tuning, have gained prominence for their efficiency in fine-tuning large pre-trained models. Despite their widespread adoption, the theoretical understanding of these methods remains limited. For instance, in prefix-tuning, we observe that a key factor in achieving performance parity with full fine-tuning lies in the reparameterization strategy. However, the theoretical principles underpinning the effectiveness of this approach have yet to be thoroughly examined. Our study demonstrates that reparameterization is not merely an engineering trick but is grounded in deep theoretical foundations. Specifically, we show that the reparameterization strategy implicitly encodes a shared structure between the prefix key and value vectors. Building on recent insights into the connection between prefix-tuning and mixture-of-experts models, we further show that this shared structure significantly improves the sample efficiency of parameter estimation compared to non-shared alternatives. Through extensive experiments in both the visual and language domains, we empirically confirm that the shared structure underlies the effectiveness of prefix-tuning across diverse tasks. We also uncover similar structural benefits in prompt-tuning, offering new perspectives on its success. Our findings provide theoretical and empirical contributions that advance the understanding of prompt-based methods and their underlying mechanisms. Our code is publicly available at https://github.com/Minhchuyentoancbn/ReparamPrefix.
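For readers unfamiliar with the mixture-of-experts connection mentioned above, the following is an illustrative sketch (notation is ours, not quoted from the paper; $p_{k,i}$ and $k_j$ denote rows of $P_K$ and $K$, and scaling factors are omitted): attention over a prefix-augmented sequence decomposes exactly into a gated two-expert mixture of the pretrained attention and a prefix-only attention.

```latex
% Prefix attention as a two-expert mixture (standard decomposition;
% scaling factors omitted, notation ours).
\[
\mathrm{Attn}\bigl(q, [P_K; K], [P_V; V]\bigr)
  = \bigl(1-\lambda(q)\bigr)\,
      \underbrace{\mathrm{softmax}\bigl(qK^{\top}\bigr)V}_{\text{pretrained expert}}
  \;+\; \lambda(q)\,
      \underbrace{\mathrm{softmax}\bigl(qP_K^{\top}\bigr)P_V}_{\text{prefix expert}},
\]
\[
\text{with gate}\quad
\lambda(q) \;=\;
  \frac{\sum_i \exp\bigl(q\,p_{k,i}^{\top}\bigr)}
       {\sum_i \exp\bigl(q\,p_{k,i}^{\top}\bigr) + \sum_j \exp\bigl(q\,k_j^{\top}\bigr)}.
\]
```

The input-dependent gate $\lambda(q)$ plays the role of a mixture weight, which is what lets the prefix components be analyzed as experts in a mixture-of-experts model.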