🤖 AI Summary
In Transformer autoregressive decoding, the KV cache imposes substantial memory overhead, and existing skip-connection approaches struggle to jointly improve representation quality and compress memory. Method: We propose SkipV1Former, the first method to reuse the uncompressed Value vectors from the first layer across deeper layers: from the second layer onward, each layer recomputes only half of its Value heads and reuses the first layer's Values for the other half, sharing Value heads across layers. This design mitigates deep-layer information decay, accelerates the model's implicit optimization, and avoids storing the reused Value heads again. SkipV1Former is compatible with advanced attention variants, including Group-Query Attention and Multi-Latent Attention, and synergizes with YOCO. Results: On standard language-modeling benchmarks, SkipV1Former alone reduces the KV cache by ~25% while improving perplexity; combined with YOCO it cuts the KV cache by nearly 50%, and existing MHA checkpoints can be uptrained to SkipV1Former with only 10–15% additional compute.
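As a rough sanity check on the ~25% figure, one can count cached tensors per layer. This is a back-of-the-envelope sketch based only on the description above; `kv_cache_ratio` is an illustrative helper, not from the paper:

```python
def kv_cache_ratio(num_layers: int) -> float:
    """Fraction of the standard MHA KV cache that SkipV1Former keeps.

    Assumptions (per the summary): every layer caches full Keys; layer 1
    caches all of its Value heads, while layers 2..L cache only half of
    theirs and reuse the other half from layer 1.
    """
    standard = 2 * num_layers             # K + V cached at every layer
    keys = num_layers                     # Keys are unchanged
    values = 1 + (num_layers - 1) * 0.5   # full V at layer 1, half after
    return (keys + values) / standard

# For a 32-layer model the cache shrinks to ~76% of the MHA baseline,
# i.e. roughly the 25% reduction reported.
print(round(kv_cache_ratio(32), 3))
```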
📝 Abstract
Transformer models have driven breakthroughs across various language tasks through their strong capability to learn rich contextual representations. Scaling them to improve representation, however, often demands substantial memory and compute costs, such as the Key-Value (KV) cache used during auto-regressive decoding. Skip connections offer a promising way to improve representation without bloating resource usage, yet most prior works either improve expressivity while leaving KV costs unchanged, or reduce memory at the cost of weaker representation. In this work, we propose SkipV1Former, a Transformer variant that uses skip connections from the first layer's Value heads to strengthen model representation and reduce the KV cache. Specifically, from the second block onward, each layer reuses half of its Value heads from the very first layer, while computing the other half as usual, cutting Value projections and the V cache by nearly 50%. Theoretically, we show that routing uncompressed first-layer Values into deeper layers restores information lost to compression and accelerates the model's implicit mesa-optimization, a key pattern of Transformers in auto-regressive tasks. Empirically, across different model scales, SkipV1Former delivers consistent reductions of approximately 25% in KV cache while improving perplexity relative to standard Multi-Head Attention (MHA) Transformers and some advanced variants. Moreover, we propose a recipe for uptraining existing MHA Transformer checkpoints to SkipV1Former with only 10–15% additional compute. Finally, SkipV1Former can be seamlessly combined with advanced methods like Group-Query Attention and Multi-Latent Attention to achieve further KV cache savings and performance improvement. When combined with YOCO, it cuts KV cache size by nearly 50% while still improving performance.
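The Value-head reuse described above can be sketched in a few lines of NumPy. This is a minimal illustration of the mechanism only (random weights, made-up shapes); it is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
seq, d_model, n_heads = 5, 16, 4
d_head = d_model // n_heads

x = rng.normal(size=(seq, d_model))

# Layer 1 computes and caches all of its Value heads as usual.
W_v1 = rng.normal(size=(d_model, d_model))
v1 = (x @ W_v1).reshape(seq, n_heads, d_head)

# A deeper layer projects only half of its Value heads
# (its Value projection is half the usual size) ...
W_v_deep = rng.normal(size=(d_model, d_model // 2))
v_new = (x @ W_v_deep).reshape(seq, n_heads // 2, d_head)

# ... and fills the remaining heads with the first layer's
# uncompressed Values, so they need not be cached again.
v_deep = np.concatenate([v1[:, : n_heads // 2], v_new], axis=1)
assert v_deep.shape == (seq, n_heads, d_head)
```

Since only `v_new` must be stored per deep layer, the per-layer V cache (and Value projection cost) is halved, matching the "nearly 50%" figure for Values in the abstract.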