TARo: Token-level Adaptive Routing for LLM Test-time Alignment

📅 2026-03-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited effectiveness of existing test-time alignment methods—which primarily focus on preference alignment—in enhancing the structured reasoning capabilities of large language models. To overcome this, the authors propose a token-level adaptive routing mechanism that dynamically guides a frozen large language model during inference, coupled with a step-level reward model trained on mathematical reasoning trajectories. This approach represents the first extension of token-level test-time alignment to complex reasoning tasks, enabling strong cross-domain generalization and seamless transfer from small to large models without retraining. Experimental results demonstrate that the method improves reasoning performance by up to 22.4% over base models and outperforms current token-level alignment approaches by 8.4%, achieving significant gains in clinical reasoning and instruction-following benchmarks.

📝 Abstract
Large language models (LLMs) exhibit strong reasoning capabilities but typically require expensive post-training to reach high performance. Recent test-time alignment methods offer a lightweight alternative, but they have been explored mainly for preference alignment rather than reasoning. To bridge this gap, we propose Token-level Adaptive Routing (TARo), which steers frozen LLMs toward structured reasoning entirely at inference time. Specifically, we first train reward models on step-wise mathematical traces to capture fine-grained logical-consistency signals, then introduce a learnable token-level router that adaptively controls how strongly the reward model guides the base model. Extensive experiments show that TARo significantly improves reasoning performance by up to +22.4% over the base model and +8.4% over existing token-level test-time alignment methods, while also boosting out-of-distribution clinical reasoning (MedXpertQA) and instruction following (AlpacaEval). Furthermore, TARo generalizes from small to large backbones without retraining, extending test-time alignment from preference optimization to robust, cross-domain reasoning.
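The abstract's core mechanism can be illustrated with a minimal sketch: at each decoding step, a router gate decides how strongly per-token reward-model scores are mixed into the frozen base model's logits. This is a hypothetical simplification, not the paper's actual implementation; the function names (`routed_step`), the additive logit mixing, and the scalar gate are all illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D logit vector.
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def routed_step(base_logits, reward_scores, router_gate):
    """One decoding step of token-level routed guidance (illustrative sketch).

    base_logits   : (V,) logits from the frozen base LM
    reward_scores : (V,) per-token scores from a step-level reward model
    router_gate   : scalar in [0, 1]; in TARo this would be predicted by a
                    learnable token-level router (here it is just an input)
    """
    guided = base_logits + router_gate * reward_scores
    return softmax(guided)

# Toy example with a vocabulary of 8 tokens.
rng = np.random.default_rng(0)
base = rng.normal(size=8)
reward = rng.normal(size=8)
p_off = routed_step(base, reward, router_gate=0.0)  # gate closed: pure base model
p_on = routed_step(base, reward, router_gate=1.0)   # gate open: full reward guidance
```

With the gate closed the distribution reduces exactly to the base model's softmax, which is what lets a learned router leave the frozen model untouched on tokens where guidance is unhelpful.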
Problem

Research questions and friction points this paper is trying to address.

test-time alignment
reasoning
large language models
preference alignment
structured reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Token-level Adaptive Routing
test-time alignment
reasoning enhancement
reward modeling
frozen LLM steering