🤖 AI Summary
To address the high computational overhead and poor adaptability of "one-size-fits-all" test-time compute (TTC) strategies in large language model (LLM) inference, this paper proposes Reward-Guided Test-Time Compute (RTTC). RTTC employs a lightweight pretrained reward model to assess query difficulty and strategy utility, selecting at inference time among diverse TTC strategies, including retrieval-augmented generation (RAG), lightweight fine-tuning, and cache reuse, on a per-query basis. It further integrates Query-State Caching and a distributed client-server architecture for cross-task efficiency and scalability. Notably, RTTC is presented as the first framework to incorporate reward modeling into TTC strategy scheduling, thereby mitigating redundant computation. Experiments across multiple LLMs and benchmarks demonstrate that RTTC consistently outperforms vanilla RAG and test-time training (TTT) baselines: it maintains or improves accuracy while reducing average inference FLOPs by 37%.
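The per-query selection step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the strategy names, the toy length-based heuristic standing in for the pretrained reward model, and all function signatures are assumptions.

```python
# Illustrative sketch of reward-guided TTC strategy selection.
# The real RTTC system scores (query, strategy) pairs with a lightweight
# pretrained reward model; here a toy heuristic plays that role.

STRATEGIES = ["direct", "rag", "ttt"]  # answer directly, retrieve-augment, or test-time train

def reward_score(query: str, strategy: str) -> float:
    """Stand-in for the reward model: estimate the expected utility of
    applying `strategy` to `query`. Toy heuristic: longer (presumably
    harder) queries favor heavier strategies."""
    difficulty = len(query) / 100.0  # crude difficulty proxy
    scores = {
        "direct": 0.6,                  # cheap baseline, no extra compute
        "rag": 0.3 + difficulty,        # retrieval pays off on harder queries
        "ttt": 0.1 + 1.5 * difficulty,  # fine-tuning pays off on the hardest
    }
    return scores[strategy]

def select_strategy(query: str) -> str:
    """Pick the strategy with the highest predicted reward for this query."""
    return max(STRATEGIES, key=lambda s: reward_score(query, s))
```

Under this toy scorer, short queries are answered directly at no extra cost, while progressively harder queries are routed to RAG and then to test-time training, which is the adaptive behavior the summary describes.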
📝 Abstract
Test-Time Compute (TTC) has emerged as a powerful paradigm for enhancing the performance of Large Language Models (LLMs) at inference, leveraging strategies such as Test-Time Training (TTT) and Retrieval-Augmented Generation (RAG). However, the optimal adaptation strategy varies across queries, and indiscriminate application of any single TTC strategy incurs substantial computational overhead. In this work, we introduce Reward-Guided Test-Time Compute (RTTC), a novel framework that adaptively selects the most effective TTC strategy for each query via a pretrained reward model, maximizing downstream accuracy across diverse domains and tasks. RTTC operates in a distributed server-client architecture, retrieving relevant samples from a remote knowledge base and applying RAG or lightweight fine-tuning on client devices only when necessary. To further mitigate redundant computation, we propose Query-State Caching, which enables the efficient reuse of historical query states at both retrieval and adaptation levels. Extensive experiments across multiple LLMs and benchmarks demonstrate that RTTC consistently achieves superior accuracy compared to vanilla RAG or TTT, validating the necessity of adaptive, reward-guided TTC selection and the potential of RTTC for scalable, high-performance language model adaptation.
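The Query-State Caching idea can be illustrated with a minimal sketch. All names here are hypothetical, and exact-match keying is a deliberate simplification: the paper reuses states at both the retrieval and adaptation levels, which would plausibly involve similarity matching rather than normalized string equality.

```python
# Illustrative sketch of Query-State Caching: reuse previously computed
# query states (e.g. retrieved passages) for repeated or near-duplicate
# queries, so the expensive retrieval/adaptation cost is paid only once.

class QueryStateCache:
    def __init__(self):
        self._store = {}  # normalized query -> cached state

    def get(self, query):
        return self._store.get(self._key(query))

    def put(self, query, state):
        self._store[self._key(query)] = state

    @staticmethod
    def _key(query: str) -> str:
        # Simplification: normalize whitespace and case. A real system
        # would likely match queries by embedding similarity instead.
        return " ".join(query.lower().split())

def answer(query, cache, retrieve, generate):
    """Serve a query, reusing cached retrieval state on a hit."""
    state = cache.get(query)
    if state is None:            # cache miss: pay the retrieval cost once
        state = retrieve(query)
        cache.put(query, state)
    return generate(query, state)
```

On a repeated query the `retrieve` call is skipped entirely, which is the level of reuse the abstract describes at the retrieval stage; the same pattern would apply to cached adaptation states.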