AI Summary
Large language models often output only final answers while concealing their full reasoning processes, limiting the reusability and transferability of their reasoning capabilities. This work proposes a trace inversion model that synthesizes high-quality reasoning traces using only the target model's inputs, answers, and optional summaries, enabling effective supervised fine-tuning of student models. For the first time, this approach demonstrates that the reasoning capabilities of black-box models can be successfully "extracted" and transferred without access to the original reasoning chains, thereby eliminating the dependency on visible intermediate reasoning steps. Experimental results show that Qwen-2.5-7B-Instruct fine-tuned on inverted traces achieves substantial performance gains, with accuracy improving from 56.8% to 77.6% on MATH500 and from 11.7% to 42.3% on JEEBench.
Abstract
Many large language models (LLMs) use reasoning to generate responses but do not reveal their full reasoning traces (a.k.a. chains of thought), instead outputting only final answers and brief reasoning summaries. To demonstrate that hiding reasoning traces does not prevent users from "stealing" a model's reasoning capabilities, we introduce trace inversion models that, given only the inputs, answers, and (optionally) reasoning summaries exposed by a target model, generate detailed, synthetic reasoning traces. We show that (1) traces synthesized by trace inversion have high overlap with the ground-truth reasoning traces (when available), and (2) fine-tuning student models on inverted traces substantially improves their reasoning. For example, fine-tuning Qwen-2.5-7B-Instruct on traces inverted from the answers and summaries of GPT-5 mini, a commercial black-box LLM, improves its performance from 56.8% to 77.6% on MATH500 and from 11.7% to 42.3% on JEEBench, compared to fine-tuning on just the answers and summaries.
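To make the pipeline concrete, the data flow the abstract describes can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: `invert_trace` is a stand-in stub for the actual trace inversion model (which would be an LLM call), and the record layout and `<think>` delimiter are assumptions about one plausible supervised fine-tuning format.

```python
# Sketch: turn the (input, answer, optional summary) triples exposed by a
# black-box target model into SFT examples containing a synthetic trace.

from typing import Callable, Optional


def invert_trace(question: str, answer: str, summary: Optional[str]) -> str:
    """Stub for the trace inversion model; a real system would prompt an LLM
    with the question, final answer, and summary to synthesize a full trace."""
    hint = f" Guided by the summary: {summary}" if summary else ""
    return f"Step-by-step reasoning for '{question}'.{hint} Therefore, {answer}."


def build_sft_record(
    question: str,
    answer: str,
    summary: Optional[str] = None,
    inverter: Callable[[str, str, Optional[str]], str] = invert_trace,
) -> dict:
    """Build one fine-tuning example: the student is trained to emit the
    synthetic reasoning trace followed by the target model's final answer."""
    trace = inverter(question, answer, summary)
    return {
        "prompt": question,
        "completion": f"<think>{trace}</think>\n{answer}",
    }


record = build_sft_record("What is 17 * 3?", "51", summary="Multiply 17 by 3.")
print(record["completion"])
```

A corpus of such records (one per target-model response) would then be fed to a standard supervised fine-tuning loop; the key point from the abstract is that only the question, answer, and summary come from the black-box model, while the trace is synthesized.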