🤖 AI Summary
This work addresses the challenge of "token overflow" in soft compression architectures: because compression is lossy, compressed tokens may lack the information needed to answer a specific query, yet existing approaches offer no effective way to detect when this happens. To this end, we propose the first query-aware overflow detection framework, which trains a lightweight probing classifier on xRAG soft-compression representations of both the query and the context, complemented by a saturation-based statistical metric that reliably distinguishes compressed from uncompressed token representations (though on its own it offers only limited overflow detection). Evaluated on HotpotQA, SQuADv2, and TriviaQA, our method achieves an average AUC-ROC of 0.72, demonstrating that query-aware modeling is crucial for accurate overflow detection. This approach provides a practical pre-filtering mechanism for compressed retrieval-augmented generation systems, improving their reliability and efficiency.
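The summary does not spell out how the saturation statistic is computed. As a minimal sketch, assuming "saturation" means the fraction of embedding dimensions whose magnitude exceeds a threshold, a query-agnostic separation of compressed from uncompressed representations could look like this (the dimension, threshold `tau`, and the toy Gaussian stand-ins for real embeddings are all illustrative assumptions, not the paper's actual metric):

```python
import numpy as np

def saturation(embedding: np.ndarray, tau: float = 2.0) -> float:
    """Hypothetical saturation metric: fraction of dimensions with |value| > tau."""
    return float(np.mean(np.abs(embedding) > tau))

rng = np.random.default_rng(0)
# Toy stand-ins: uncompressed token embeddings vs. xRAG-style compressed tokens,
# modeled here as Gaussians with different scales (an assumption for illustration).
uncompressed = rng.normal(0.0, 1.0, size=(100, 4096))
compressed = rng.normal(0.0, 3.0, size=(100, 4096))

sat_u = np.array([saturation(e) for e in uncompressed])
sat_c = np.array([saturation(e) for e in compressed])
# Compressed tokens saturate far more dimensions, so the statistic separates
# the two populations without ever looking at the query.
```

Because the statistic never consults the query, it can flag *that* a token is compressed, but not *whether* the compression discarded the information a particular question needs, which matches the limited overflow detection reported in the abstract.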
📝 Abstract
Efficient long-context processing remains a crucial challenge for contemporary large language models (LLMs), especially in resource-constrained environments. Soft compression architectures promise to extend effective context length by replacing long token sequences with smaller sets of learned compressed tokens. Yet the limits of compressibility, and the point at which compression begins to erase task-relevant content, remain underexplored. In this paper, we define *token overflow* as a regime in which compressed representations no longer contain sufficient information to answer a given query, and propose a methodology to characterize and detect it. In the xRAG soft-compression setting, we find that query-agnostic saturation statistics reliably separate compressed from uncompressed token representations, providing a practical tool for identifying compressed tokens but showing limited overflow detection capability. Lightweight probing classifiers over both query and context xRAG representations detect overflow with an average AUC-ROC of 0.72 on the HotpotQA, SQuADv2, and TriviaQA datasets, demonstrating that incorporating query information improves detection performance. These results move from query-independent diagnostics to query-aware detectors, enabling low-cost pre-LLM gating to mitigate compression-induced errors.
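The query-aware probe described above can be sketched as a tiny logistic-regression classifier over combined query and context embeddings, scored with AUC-ROC. Everything below (embedding dimensions, the synthetic overflow labels, the interaction features, the training loop) is an illustrative assumption, not the paper's actual architecture or data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-ins for xRAG representations: in the real setting these would be
# the model's query embedding and the compressed context token (dims assumed).
d, n = 32, 400
q = rng.normal(size=(n, d))     # query embeddings
ctx = rng.normal(size=(n, d))   # compressed-context embeddings
# Synthetic overflow label (toy rule): "overflow" when query and context disagree.
y = ((q * ctx).sum(axis=1) < 0).astype(float)

# Simple probe features: both embeddings plus their elementwise interaction.
X = np.hstack([q, ctx, q * ctx])
w, b = np.zeros(X.shape[1]), 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient descent on the logistic loss -- the "lightweight" probe.
for _ in range(500):
    p = sigmoid(X @ w + b)
    g = p - y
    w -= 0.1 * (X.T @ g) / n
    b -= 0.1 * g.mean()

def auc_roc(scores, labels):
    """Rank-based AUC-ROC: probability a positive outranks a negative."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

auc = auc_roc(sigmoid(X @ w + b), y)
```

The design point is that the probe sees the query and the compressed context *jointly*, so it can learn query-specific signals of information loss; a held-out split and the real xRAG embeddings would replace the synthetic pieces in practice.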