🤖 AI Summary
This work addresses the prefill bottleneck in retrieval-augmented generation caused by concatenating multiple documents, as well as the loss of cross-document reasoning when using separate key-value caches. To resolve this trade-off without requiring additional training, the authors propose a parallel context-expert decoding framework that shifts evidence aggregation from the attention mechanism to the decoding stage. By employing isolated document-expert modeling and a retrieval-aware contrastive decoding strategy, the method restores cross-document interactions without constructing a shared attention context. This approach substantially alleviates prefill computational overhead while maintaining efficient generation and significantly improving the quality of multi-document semantic integration.
📝 Abstract
Retrieval-Augmented Generation faces a trade-off: concatenating documents into a long prompt enables multi-document reasoning but creates prefill bottlenecks, while encoding document KV caches separately offers speed but breaks cross-document interaction. We propose Parallel Context-of-Experts Decoding (Pced), a training-free framework that shifts evidence aggregation from the attention mechanism to the decoding stage. Pced treats retrieved documents as isolated "experts", synchronizing their predictions via a novel retrieval-aware contrastive decoding rule that weighs expert logits against the model prior. This approach recovers cross-document reasoning capabilities without constructing shared attention across documents.
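To make the decoding rule concrete, here is a minimal sketch of what a retrieval-aware contrastive combination of expert logits could look like. This is an illustration only, not the paper's exact formulation: the function name, the normalized retrieval `weights`, and the contrast scale `alpha` are all assumptions. The idea shown is that each document expert's logits are contrasted against the model's no-context prior, and the contrasts are aggregated with retrieval-score weights.

```python
import numpy as np

def contrastive_expert_decode(expert_logits, prior_logits, weights, alpha=1.0):
    """Hypothetical sketch of retrieval-aware contrastive decoding.

    expert_logits: (n_experts, vocab) next-token logits, one row per
                   document expert (each computed from an isolated KV cache).
    prior_logits:  (vocab,) logits from the model with no retrieved context.
    weights:       per-expert retrieval scores (assumed; normalized below).
    alpha:         contrast strength (assumed hyperparameter).
    """
    expert_logits = np.asarray(expert_logits, dtype=float)
    prior_logits = np.asarray(prior_logits, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize retrieval scores

    # Contrast each expert against the prior: tokens an expert boosts
    # relative to the prior are treated as document-specific evidence.
    contrast = expert_logits - prior_logits          # (n_experts, vocab)

    # Aggregate evidence across experts at the decoding stage,
    # without any shared attention context across documents.
    combined = prior_logits + alpha * (weights @ contrast)
    return combined

# Toy example: 2 experts over a 4-token vocabulary.
experts = [[2.0, 0.0, 0.0, 0.0],   # expert 1 favors token 0
           [0.0, 2.0, 0.0, 0.0]]   # expert 2 favors token 1
prior = [0.5, 0.5, 0.5, 0.5]
out = contrastive_expert_decode(experts, prior, weights=[0.7, 0.3])
# The more highly retrieved expert (weight 0.7) dominates, so token 0 wins.
```

Because aggregation happens per decoding step over logits, each document's prefill can run independently, which is where the prefill savings come from.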