🤖 AI Summary
How can a single large language model (LLM) intrinsically acquire search capabilities and perform autonomous, self-verifying, autoregressive deep reasoning, without relying on external retrieval or verification modules?
Method: This paper introduces Chain-of-Action-Thought (COAT), a novel paradigm that endows a 7B-parameter LLM with autonomous strategy exploration, self-reflection, and autoregressive search through a two-stage training process: format-guided supervised fine-tuning followed by PPO-based reinforcement learning. Crucially, the entire search process is internalized as the model's own generative behavior.
Contribution/Results: The resulting model achieves state-of-the-art performance on mathematical reasoning benchmarks while demonstrating strong cross-domain generalization. The authors release Satori, a fully open-source, reproducible 7B model trained on open data and code, thereby advancing research on interpretable, self-reliant reasoning in LLMs.
📝 Abstract
Large language models (LLMs) have demonstrated remarkable reasoning capabilities across diverse domains. Recent studies have shown that increasing test-time computation enhances LLMs' reasoning capabilities. This typically involves extensive sampling at inference time guided by an external LLM verifier, resulting in a two-player system. Despite external guidance, the effectiveness of this system demonstrates the potential of a single LLM to tackle complex tasks. Thus, we pose a new research problem: Can we internalize the search capabilities to fundamentally enhance the reasoning abilities of a single LLM? This work explores an orthogonal direction focusing on post-training LLMs for autoregressive searching (i.e., an extended reasoning process with self-reflection and self-exploration of new strategies). To achieve this, we propose Chain-of-Action-Thought (COAT) reasoning and a two-stage training paradigm: 1) a small-scale format tuning stage to internalize the COAT reasoning format and 2) a large-scale self-improvement stage leveraging reinforcement learning. Our approach results in Satori, a 7B LLM trained on open-source models and data. Extensive empirical evaluations demonstrate that Satori achieves state-of-the-art performance on mathematical reasoning benchmarks while exhibiting strong generalization to out-of-domain tasks. Code, data, and models will be fully open-sourced.
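To make the idea of internalized autoregressive search concrete, the loop below is a minimal sketch of COAT-style decoding: a single model emits a meta-action marker after each reasoning segment, deciding on its own whether to continue, self-verify, or restart with a new strategy. The token names (`<|continue|>`, `<|reflect|>`, `<|explore|>`), the `ScriptedModel` stand-in, and the function interfaces are illustrative assumptions for this sketch, not the paper's exact implementation.

```python
# Hypothetical meta-action tokens; the real model's special tokens may differ.
CONTINUE, REFLECT, EXPLORE = "<|continue|>", "<|reflect|>", "<|explore|>"
META_ACTIONS = (CONTINUE, REFLECT, EXPLORE)

class ScriptedModel:
    """Stand-in for the LLM: replays a fixed list of reasoning segments,
    so the control flow of the search loop can be shown without a real model."""
    def __init__(self, segments):
        self._segments = iter(segments)

    def generate(self, context):
        # A real model would condition on the full trajectory so far.
        return next(self._segments)

def coat_generate(model, prompt, max_steps=8):
    """Single-model autoregressive search: each generated segment ends with a
    meta-action token; a segment with no meta-action is the final answer."""
    trajectory = prompt
    for _ in range(max_steps):
        segment = model.generate(trajectory)
        trajectory += segment
        if not segment.endswith(META_ACTIONS):
            break  # no meta-action requested: final answer reached
    return trajectory

# Toy trajectory: continue, then reflect, then explore a new strategy, then answer.
model = ScriptedModel([
    "step 1: try factoring " + CONTINUE,
    "check step 1 for errors " + REFLECT,
    "factoring stalls; switch to substitution " + EXPLORE,
    "answer: x = 2",
])
out = coat_generate(model, "Solve x^2 - 4x + 4 = 0. ")
```

The key design point this sketch illustrates is that there is no second "verifier" player: reflection and exploration are ordinary tokens in the same autoregressive stream, which is what the two-stage training (format tuning, then RL) teaches the model to emit.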