RTTC: Reward-Guided Collaborative Test-Time Compute

📅 2025-08-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high computational overhead and poor adaptability of one-size-fits-all test-time compute (TTC) strategies in large language model (LLM) inference, this paper proposes Reward-Guided Test-Time Compute (RTTC). RTTC employs a lightweight pretrained reward model to dynamically assess query difficulty and strategy utility, enabling per-query selection among diverse TTC strategies, including retrieval-augmented generation (RAG), lightweight fine-tuning, and cache reuse. It further integrates query-state caching and a distributed client-server architecture for cross-task efficiency and scalability. Notably, RTTC is the first framework to incorporate reward modeling into TTC strategy scheduling, thereby eliminating redundant computation. Experiments across multiple LLMs and benchmarks demonstrate that RTTC consistently outperforms vanilla RAG and test-time training (TTT) baselines: it maintains or improves accuracy while reducing average inference FLOPs by 37%.

📝 Abstract
Test-Time Compute (TTC) has emerged as a powerful paradigm for enhancing the performance of Large Language Models (LLMs) at inference, leveraging strategies such as Test-Time Training (TTT) and Retrieval-Augmented Generation (RAG). However, the optimal adaptation strategy varies across queries, and indiscriminate application of TTC strategy incurs substantial computational overhead. In this work, we introduce Reward-Guided Test-Time Compute (RTTC), a novel framework that adaptively selects the most effective TTC strategy for each query via a pretrained reward model, maximizing downstream accuracy across diverse domains and tasks. RTTC operates in a distributed server-client architecture, retrieving relevant samples from a remote knowledge base and applying RAG or lightweight fine-tuning on client devices only when necessary. To further mitigate redundant computation, we propose Query-State Caching, which enables the efficient reuse of historical query states at both retrieval and adaptation levels. Extensive experiments across multiple LLMs and benchmarks demonstrate that RTTC consistently achieves superior accuracy compared to vanilla RAG or TTT, validating the necessity of adaptive, reward-guided TTC selection and the potential of RTTC for scalable, high-performance language model adaptation.
Problem

Research questions and friction points this paper is trying to address.

The optimal TTC adaptation strategy varies from query to query
Indiscriminate application of a single TTC strategy incurs substantial computational overhead
LLM accuracy must be sustained across diverse domains and tasks without excess inference cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive TTC strategy selection via reward model
Distributed server-client architecture for RAG
Query-State Caching to reduce redundant computation