🤖 AI Summary
To address the low decoding efficiency and high memory overhead of large language models (LLMs) on long reasoning tasks, this paper proposes a dynamic segment-offloading mechanism: a lightweight 1.5B model identifies high-difficulty token segments within its own chain-of-thought (CoT) reasoning trace and offloads only those segments to a stronger model, generating the remainder itself. Key contributions include: (1) a fine-grained, model-initiated offloading mechanism in which the small model itself decides when to defer; (2) a difficulty-annotated dataset of 18k CoT traces drawn from OpenR1-Math-220k, enabling segment-level difficulty modeling; and (3) a training pipeline combining supervised fine-tuning (SFT) and reinforcement learning fine-tuning (RLFT) to teach the offloading behavior. On AIME24, offloading just 1.35% of tokens improves accuracy by 24%, and offloading 5% yields a 28.3% gain. The code, models, dataset, and training logs are fully open-sourced.
📝 Abstract
Reasoning in large language models (LLMs) tends to produce substantially longer token generation sequences than simpler language modeling tasks. This extended generation length reflects the multi-step, compositional nature of reasoning and is often correlated with higher solution accuracy. From an efficiency perspective, longer token generation exacerbates the inherently sequential and memory-bound decoding phase of LLMs. However, not all parts of this expensive reasoning process are equally difficult to generate. We leverage this observation by offloading only the most challenging parts of the reasoning process to a larger, more capable model, while performing most of the generation with a smaller, more efficient model; furthermore, we teach the smaller model to identify these difficult segments and independently trigger offloading when needed. To enable this behavior, we annotate difficult segments across 18k reasoning traces from the OpenR1-Math-220k chain-of-thought (CoT) dataset. We then apply supervised fine-tuning (SFT) and reinforcement learning fine-tuning (RLFT) to a 1.5B-parameter reasoning model, training it to offload the most challenging parts of its own reasoning process to a larger model. This approach improves AIME24 reasoning accuracy by 24% and 28.3% while offloading 1.35% and 5% of the generated tokens, respectively. We open-source our SplitReason model, data, code, and logs.
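The decoding behavior described above — a small model generating most tokens and handing control to a large model for hard segments — can be sketched as a simple control-token loop. This is a minimal illustration under stated assumptions: the `<offload>`/`</offload>` token names and the stubbed model interfaces are hypothetical, not the paper's actual API.

```python
def split_reason_decode(small_model, large_model, prompt, max_tokens=256):
    """Alternate between a small and a large model based on control tokens.

    Each model is a callable: context -> next token (stubbed below).
    The small model emits a hypothetical <offload> token when it judges
    the upcoming segment too difficult; the large model emits </offload>
    to hand control back.
    """
    output = []
    use_large = False
    context = prompt
    for _ in range(max_tokens):
        model = large_model if use_large else small_model
        token = model(context)
        if token == "<offload>":      # small model defers a hard segment
            use_large = True
            continue
        if token == "</offload>":     # large model returns control
            use_large = False
            continue
        if token == "<eos>":
            break
        output.append(token)
        context = context + " " + token
    return output

def make_stub(script):
    """Toy stand-in for a model: replays a fixed token script."""
    it = iter(script)
    return lambda ctx: next(it, "<eos>")

# The small stub flags one segment as hard; the large stub completes it.
small = make_stub(["Let", "us", "<offload>", "then", "done", "<eos>"])
large = make_stub(["solve", "x=3", "</offload>"])
```

Running `split_reason_decode(small, large, "prompt")` interleaves the two stubs, yielding `["Let", "us", "solve", "x=3", "then", "done"]`; in a real system the efficiency gain comes from the large model being invoked for only a small fraction (e.g. 1.35–5%) of the generated tokens.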