🤖 AI Summary
Transformer prompt tuning faces a fundamental memory capacity bottleneck, limiting effective long-context modeling. Method: This work formally models the information storage and retrieval mechanisms in prompt tuning from an information-theoretic perspective, incorporating architectural properties of Transformers. Contribution/Results: We theoretically prove that, under prompt tuning, Transformer memory capacity scales only linearly with prompt length—precluding superlinear expansion—and that the retrievable information is inherently bounded, irrespective of context length. This provides the first rigorous theoretical explanation for contextual degradation in large language models. Unlike prior empirical observations, our analysis establishes a quantitative relationship between memory capacity and architectural parameters (e.g., attention heads, hidden dimension, prompt length), revealing the root cause of long-text modeling limitations. The results offer principled guidance for efficient prompt design and parameter-efficient model compression.
📝 Abstract
Despite the empirical success of prompt tuning in adapting pretrained language models to new tasks, theoretical analyses of its capabilities remain limited. Existing theoretical work primarily addresses universal approximation properties, showing expressive power comparable to that of standard weight tuning. In this paper, we explore a different aspect of the theory of transformers: the memorization capability of prompt tuning. We provide two principal theoretical contributions. First, we prove that the amount of information memorized by a transformer cannot scale faster than linearly with the prompt length. Second, and more importantly, we present the first formal proof of a phenomenon empirically observed in large language models: performance degradation in transformers with extended contexts. We rigorously demonstrate that transformers inherently have limited memory, constraining the amount of information they can retain regardless of the context size. This finding offers a fundamental understanding of the intrinsic limitations of transformer architectures, particularly their ability to handle long sequences.
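The core claim above — that memorized information grows at most linearly with prompt length and does not grow with context size — can be illustrated with a small sketch. Note that `capacity_bound_bits` and the bound `C = p · d · b` are hypothetical stand-ins for this illustration, not the paper's actual theorem; the paper's bound depends on architectural parameters (attention heads, hidden dimension, prompt length) in a form not reproduced here.

```python
# Illustrative sketch (NOT the paper's actual bound): a hypothetical upper
# bound on memorized bits that is linear in prompt length and independent
# of the input context length.
def capacity_bound_bits(prompt_len: int, hidden_dim: int, n_heads: int,
                        bits_per_param: float = 16.0) -> float:
    """Hypothetical bound C = prompt_len * hidden_dim * bits_per_param.

    prompt_len: number of soft-prompt vectors (the only quantity the
                bound grows with)
    hidden_dim: model hidden size d; each prompt vector has d entries
    n_heads:    attention heads; included only to check the standard
                multi-head divisibility constraint
    bits_per_param: bits used to store each prompt parameter
    """
    assert hidden_dim % n_heads == 0  # d must split evenly across heads
    # Linear in prompt_len: doubling the prompt doubles the bound,
    # but lengthening the context changes nothing here.
    return prompt_len * hidden_dim * bits_per_param


# Doubling the prompt length doubles the bound (linear, not superlinear):
b_short = capacity_bound_bits(prompt_len=10, hidden_dim=768, n_heads=12)
b_long = capacity_bound_bits(prompt_len=20, hidden_dim=768, n_heads=12)
```

Under this toy model, `b_long == 2 * b_short`, matching the linear-scaling result; no term in the bound involves the context length, mirroring the paper's claim that retrievable information is bounded irrespective of context size.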