🤖 AI Summary
Large language models (LLMs) incur high inference costs, while small language models (SLMs) struggle to follow LLM reasoning paths, creating a trade-off between accuracy and efficiency. Method: We propose a fine-grained, token-level routing mechanism that invokes the LLM only at tokens where the LLM's and SLM's reasoning paths critically diverge; all other tokens are generated efficiently by the SLM. We find empirically that such path-divergent tokens are highly sparse, enabling a lightweight neural router trained on labels from a fully automatic divergence-annotation pipeline. We instantiate this hybrid inference architecture on the DeepSeek-R1 family, pairing R1-1.5B with R1-32B. Results: On mathematical reasoning, code generation, and question-answering benchmarks, our method surpasses the average accuracy of R1-7B by 1.6× and outperforms even R1-14B while activating only 5.6B parameters on average; compared with R1-32B, it delivers a 2.8× wall-clock speedup at comparable accuracy, significantly advancing the Pareto frontier of test-time scaling efficiency.
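To make the routing mechanism concrete, below is a minimal PyTorch sketch of token-level routed decoding: the SLM proposes every token, and a lightweight router escalates only the positions it flags as path-divergent to the LLM. This is illustrative only, not the authors' implementation; the `TokenRouter` architecture, the `ToyLM` stand-in models, the `(logits, hidden)` model interface, and the 0.5 threshold are all assumptions (the real code is at https://github.com/thu-nics/R2R).

```python
import torch
import torch.nn as nn


class ToyLM(nn.Module):
    """Stand-in language model: embedding plus linear head (illustrative only)."""
    def __init__(self, vocab_size: int = 100, hidden_size: int = 32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hidden_size)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, ids: torch.Tensor):
        hidden = self.emb(ids)            # (batch, seq, hidden)
        return self.head(hidden), hidden  # logits and hidden states


class TokenRouter(nn.Module):
    """Lightweight MLP that predicts, from the SLM's last hidden state,
    whether the next token is path-divergent and needs the LLM."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, 256), nn.ReLU(), nn.Linear(256, 1)
        )

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.mlp(hidden)).squeeze(-1)


@torch.no_grad()
def routed_decode(slm, llm, router, ids, max_new_tokens=32, threshold=0.5):
    """Greedy decoding where the SLM proposes every token and the router
    escalates only the (sparse) divergent positions to the LLM."""
    for _ in range(max_new_tokens):
        logits, hidden = slm(ids)
        token = logits[:, -1].argmax(-1, keepdim=True)           # SLM proposal
        if router(hidden[:, -1]).item() > threshold:             # divergence flagged
            token = llm(ids)[0][:, -1].argmax(-1, keepdim=True)  # LLM override
        ids = torch.cat([ids, token], dim=-1)
    return ids


# Toy usage: in practice slm/llm would be R1-1.5B and R1-32B.
slm, llm, router = ToyLM(), ToyLM(), TokenRouter(32)
prompt = torch.randint(0, 100, (1, 5))
print(routed_decode(slm, llm, router, prompt, max_new_tokens=8))
```

Because divergent tokens are sparse, the LLM branch fires rarely, which is why the average activated parameter count stays close to the SLM's.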
📝 Abstract
Large Language Models (LLMs) achieve impressive reasoning capabilities at the cost of substantial inference overhead, posing serious deployment challenges. Although distilled Small Language Models (SLMs) substantially improve efficiency, their performance suffers because they fail to follow LLMs' reasoning paths. Fortunately, we reveal that only a small fraction of tokens genuinely causes the reasoning paths of LLMs and SLMs to diverge. Most generated tokens are either identical or exhibit neutral differences, such as minor variations in abbreviations or expressions. Leveraging this insight, we introduce **Roads to Rome (R2R)**, a neural token-routing method that selectively invokes the LLM only for these critical, path-divergent tokens, while leaving the majority of token generation to the SLM. We also develop an automatic data generation pipeline that identifies divergent tokens and produces token-level routing labels to train the lightweight router. We apply R2R to combine the R1-1.5B and R1-32B models from the DeepSeek family and evaluate it on challenging math, coding, and QA benchmarks. With an average activated parameter size of 5.6B, R2R surpasses the average accuracy of R1-7B by 1.6×, outperforming even the R1-14B model. Compared to R1-32B, it delivers a 2.8× wall-clock speedup with comparable performance, advancing the Pareto frontier of test-time scaling efficiency. Our code is available at https://github.com/thu-nics/R2R.
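As a rough illustration of the label-generation idea, the sketch below teacher-forces the LLM's reasoning trace through the SLM (assuming the same `(logits, hidden)` model interface as the sketch above), finds positions where the SLM's greedy prediction mismatches the trace, and keeps only the mismatches an external judge deems path-divergent. The `verify_divergence` callable and the greedy-mismatch criterion are placeholder assumptions for exposition, not the paper's exact procedure.

```python
import torch


@torch.no_grad()
def label_divergent_tokens(slm, llm_trace_ids, verify_divergence):
    """Label each position of an LLM reasoning trace: 1 if the SLM's greedy
    token mismatches the trace AND the mismatch is judged path-divergent,
    else 0 (identical token or neutral difference)."""
    logits, _ = slm(llm_trace_ids[:, :-1])  # SLM prediction at every prefix
    slm_tokens = logits.argmax(-1)          # SLM's greedy continuation
    ref_tokens = llm_trace_ids[:, 1:]       # LLM's actual next tokens
    labels = torch.zeros_like(ref_tokens)
    for b, t in (slm_tokens != ref_tokens).nonzero(as_tuple=False).tolist():
        # verify_divergence is a stand-in for an automated judge, e.g. continue
        # generation from the SLM's token and check whether the outcome changes.
        if verify_divergence(llm_trace_ids[b, : t + 1],
                             slm_tokens[b, t], ref_tokens[b, t]):
            labels[b, t] = 1
    return labels  # sparse binary targets for training the router
```

The resulting labels supervise the router directly at the token level, so no human annotation of divergences is required.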