🤖 AI Summary
Large reasoning models improve accuracy via extended reasoning chains but suffer from high latency and computational cost. This paper proposes Retrieval-of-Thought (RoT), a method that explicitly models salient "thought" steps from historical reasoning as a dynamic graph with both semantic and sequential structure, enabling efficient retrieval and flexible recomposition. Combined with a reward-guided graph traversal strategy, RoT assembles problem-specific reasoning templates that guide generation. Evaluated across multiple benchmarks and models, RoT reduces output tokens by up to 40%, inference latency by 82%, and cost by 59% while maintaining accuracy. Its core contribution is an explicit formulation of the reasoning process as a structured, retrievable, reusable, and optimizable graph, transforming implicit reasoning traces into explicit, manipulable graph-structured knowledge.
📝 Abstract
Large reasoning models improve accuracy by producing long reasoning traces, but this inflates latency and cost, motivating inference-time efficiency. We propose Retrieval-of-Thought (RoT), which reuses prior reasoning as composable "thought" steps to guide new problems. RoT organizes steps into a thought graph with sequential and semantic edges to enable fast retrieval and flexible recombination. At inference, RoT retrieves query-relevant nodes and applies reward-guided traversal to assemble a problem-specific template that guides generation. This dynamic template reuse reduces redundant exploration and, therefore, reduces output tokens while preserving accuracy. We evaluate RoT on reasoning benchmarks with multiple models, measuring accuracy, token usage, latency, and memory overhead. Findings show small prompt growth but substantial efficiency gains, with RoT reducing output tokens by up to 40%, inference latency by 82%, and cost by 59% while maintaining accuracy. RoT establishes a scalable paradigm for efficient LRM reasoning via dynamic template construction through retrieval.
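The abstract's pipeline, building a thought graph with sequential and semantic edges, retrieving query-relevant nodes, and traversing the graph under a reward signal to assemble a template, can be illustrated with a toy sketch. This is not the authors' implementation: the embeddings here are bag-of-words counts, the "reward" is just query similarity, and the similarity threshold and greedy traversal are placeholder assumptions.

```python
# Conceptual sketch of RoT-style thought-graph retrieval (not the paper's code).
# Assumptions: toy bag-of-words embeddings; reward approximated by cosine
# similarity to the query; greedy traversal; 0.5 semantic-edge threshold.
from collections import Counter
import math

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ThoughtGraph:
    def __init__(self):
        self.steps = []   # node id -> thought-step text
        self.seq = {}     # sequential edges: id -> next id in the same trace
        self.sem = {}     # semantic edges: id -> set of similar node ids

    def add_trace(self, steps):
        """Insert one historical reasoning trace as a chain of nodes."""
        ids = []
        for text in steps:
            i = len(self.steps)
            self.steps.append(text)
            # semantic edge to any sufficiently similar existing node
            self.sem[i] = {j for j in range(i)
                           if cosine(embed(text), embed(self.steps[j])) > 0.5}
            ids.append(i)
        for a, b in zip(ids, ids[1:]):
            self.seq[a] = b
        return ids

    def retrieve(self, query, k=2):
        """Return the k nodes most relevant to the query."""
        q = embed(query)
        ranked = sorted(range(len(self.steps)),
                        key=lambda i: cosine(q, embed(self.steps[i])),
                        reverse=True)
        return ranked[:k]

    def traverse(self, start, query, max_len=4):
        """Greedy reward-guided walk assembling a problem-specific template."""
        q = embed(query)
        path, node = [start], start
        while len(path) < max_len:
            nbrs = set(self.sem.get(node, set()))
            if node in self.seq:
                nbrs.add(self.seq[node])
            nbrs -= set(path)
            if not nbrs:
                break
            node = max(nbrs, key=lambda i: cosine(q, embed(self.steps[i])))
            path.append(node)
        return [self.steps[i] for i in path]
```

A retrieved entry node anchors the traversal, and the resulting ordered step list would be prepended to the prompt as a template, which is where the reported token and latency savings would come from.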