Cache-to-Cache: Direct Semantic Communication Between Large Language Models

📅 2025-10-03
📈 Citations: 0 · Influential citations: 0
🤖 AI Summary
Existing LLM collaboration systems rely on text-based communication, which suffers from semantic distortion and token-by-token generation latency. This paper proposes the Cache-to-Cache (C2C) paradigm, presented as the first approach to enable direct, semantics-preserving inter-LLM communication via key-value (KV) caches. It introduces a neural network that projects and fuses the source model's KV cache into the target model's, coupled with a learnable gating mechanism that selects which target layers benefit from cache communication, alongside semantic-enhancement techniques, to bypass textual intermediation entirely. The resulting multi-model collaborative inference architecture supports deep semantic interaction without decoding intermediate representations into text. Experiments show an average accuracy improvement of 8.5–10.5% over single-model baselines, plus a 3.0–5.0% gain and roughly a 2.0× latency speedup over text-based communication baselines. The core contribution is establishing the KV cache as a novel, principled semantic carrier for cross-model collaboration.
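One way to read the gated fusion described above is as a residual update on the target model's cache. The formalization below is a sketch inferred from this summary; the projectors $f_\theta, f_\phi$, the per-layer scalar gate $g^{(l)}$, and the residual form are assumed notation, not the paper's.

$$
\tilde{K}^{(l)}_{\mathrm{tgt}} = K^{(l)}_{\mathrm{tgt}} + g^{(l)} \, f_{\theta}\!\left(K^{(l)}_{\mathrm{src}}\right),
\qquad
\tilde{V}^{(l)}_{\mathrm{tgt}} = V^{(l)}_{\mathrm{tgt}} + g^{(l)} \, f_{\phi}\!\left(V^{(l)}_{\mathrm{src}}\right),
\qquad
g^{(l)} \in [0, 1].
$$

A gate driven toward zero leaves layer $l$ of the target cache untouched, which is how a learnable gate can "select the target layers that benefit from cache communication."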

📝 Abstract
Multi-LLM systems harness the complementary strengths of diverse Large Language Models, achieving performance and efficiency gains unattainable by a single model. In existing designs, LLMs communicate through text, forcing internal representations to be transformed into output token sequences. This process both loses rich semantic information and incurs token-by-token generation latency. Motivated by these limitations, we ask: Can LLMs communicate beyond text? Oracle experiments show that enriching the KV-Cache semantics can improve response quality without increasing cache size, supporting KV-Cache as an effective medium for inter-model communication. Thus, we propose Cache-to-Cache (C2C), a new paradigm for direct semantic communication between LLMs. C2C uses a neural network to project and fuse the source model's KV-cache with that of the target model to enable direct semantic transfer. A learnable gating mechanism selects the target layers that benefit from cache communication. Compared with text communication, C2C utilizes the deep, specialized semantics from both models, while avoiding explicit intermediate text generation. Experiments show that C2C achieves 8.5-10.5% higher average accuracy than individual models. It further outperforms the text communication paradigm by approximately 3.0-5.0%, while delivering an average 2.0x speedup in latency. Our code is available at https://github.com/thu-nics/C2C.
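The mechanism in the abstract (project, gate, fuse) is compact enough to sketch. Below is a minimal PyTorch sketch of a C2C-style fuser; the class names, the linear projectors, the scalar per-layer gate, and the flattened cache shapes are all assumptions of this sketch, not the paper's implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn


class C2CFuserLayer(nn.Module):
    """Fuses one layer of a source model's KV-cache into the target model's.

    Hypothetical module: the paper specifies a neural projector plus a
    learnable gate, but not this exact architecture.
    """

    def __init__(self, src_dim: int, tgt_dim: int):
        super().__init__()
        self.k_proj = nn.Linear(src_dim, tgt_dim)  # projects source keys
        self.v_proj = nn.Linear(src_dim, tgt_dim)  # projects source values
        # sigmoid(gate_logit) lies in (0, 1); a near-zero gate effectively
        # switches cache communication off for this layer.
        self.gate_logit = nn.Parameter(torch.zeros(1))

    def forward(self, k_src, v_src, k_tgt, v_tgt):
        # Caches flattened to [batch, seq, dim] for simplicity; real KV
        # caches are shaped [batch, heads, seq, head_dim].
        g = torch.sigmoid(self.gate_logit)
        k_fused = k_tgt + g * self.k_proj(k_src)
        v_fused = v_tgt + g * self.v_proj(v_src)
        return k_fused, v_fused


class C2CFuser(nn.Module):
    """One fuser per layer, pairing source layer i with target layer i
    (a simplification: aligning layers across heterogeneous models
    needs more care in practice)."""

    def __init__(self, num_layers: int, src_dim: int, tgt_dim: int):
        super().__init__()
        self.layers = nn.ModuleList(
            [C2CFuserLayer(src_dim, tgt_dim) for _ in range(num_layers)]
        )

    def forward(self, src_cache, tgt_cache):
        # src_cache / tgt_cache: lists of per-layer (K, V) tuples.
        return [
            layer(k_s, v_s, k_t, v_t)
            for layer, (k_s, v_s), (k_t, v_t)
            in zip(self.layers, src_cache, tgt_cache)
        ]
```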
Problem

Research questions and friction points this paper is trying to address.

Enabling direct semantic communication between LLMs
Avoiding text generation latency and information loss
Improving multi-LLM system performance and efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Direct semantic communication between LLMs via KV-cache projection and fusion
Learnable gating mechanism selects which target layers benefit from cache fusion
Avoids intermediate text generation for faster and more accurate responses (a toy usage sketch follows below)
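To make the inference flow implied by these bullets concrete (prefill both models, fuse the caches once, then decode only with the target model), here is a toy exercise of the hypothetical C2CFuser from the sketch above; the shapes and dimensions are arbitrary, and no real model is involved.

```python
# Toy run of the hypothetical C2CFuser sketched earlier (assumes that class
# is in scope); random tensors stand in for prefill-time KV caches.
import torch

num_layers, batch, seq = 4, 1, 16
src_dim, tgt_dim = 512, 768  # e.g., a smaller source model, larger target

src_cache = [(torch.randn(batch, seq, src_dim),
              torch.randn(batch, seq, src_dim)) for _ in range(num_layers)]
tgt_cache = [(torch.randn(batch, seq, tgt_dim),
              torch.randn(batch, seq, tgt_dim)) for _ in range(num_layers)]

fuser = C2CFuser(num_layers, src_dim, tgt_dim)
fused_cache = fuser(src_cache, tgt_cache)

# The target model would now decode conditioned on fused_cache instead of
# tgt_cache, with no intermediate text exchanged between the two models.
print(fused_cache[0][0].shape)  # torch.Size([1, 16, 768])
```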
👥 Authors

Tianyu Fu
Ph.D. at Tsinghua University
Efficient AI · LLMs · Sparse Computation
Zihan Min
Tsinghua University
Hanling Zhang
The Chinese University of Hong Kong
Jichao Yan
Tsinghua University
Guohao Dai
Associate Professor at Shanghai Jiao Tong University
Sparse Computation · Large-scale Graph Processing · FPGA · Circuits and Systems
Wanli Ouyang
Shanghai AI Laboratory
Yu Wang
Tsinghua University