🤖 AI Summary
To address the unreliability of chain-of-thought (CoT) generation by large language models (LLMs) on complex reasoning tasks such as mathematics and programming, this paper proposes BRiTE, a framework that explicitly models latent reasoning processes and multi-source evaluation signals via a unified probabilistic graphical model. BRiTE introduces a two-stage bootstrapping reinforcement learning algorithm for end-to-end optimization that requires no human-annotated reasoning traces, and it comes with a theoretical convergence guarantee at rate O(1/T). By combining reward shaping with joint-probability optimization, BRiTE consistently improves base models (e.g., LLaMA, Qwen) on mathematical benchmarks (MATH, AMC) and programming benchmarks (HumanEval, MBPP), outperforming rejection-sampling baselines and matching or exceeding supervised fine-tuning (SFT). Its core innovations are a reasoning-guided bootstrapping mechanism and a provably efficient probabilistic paradigm for CoT optimization.
📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities in complex reasoning tasks, yet generating reliable reasoning processes remains a significant challenge. We present a unified probabilistic framework that formalizes LLM reasoning through a novel graphical model incorporating latent thinking processes and evaluation signals. Within this framework, we introduce the Bootstrapping Reinforced Thinking Process (BRiTE) algorithm, which works in two steps. First, it generates high-quality rationales by approximating the optimal thinking process through reinforcement learning, using a novel reward shaping mechanism. Second, it enhances the base LLM by maximizing the joint probability of rationale generation with respect to the model's parameters. Theoretically, we demonstrate that BRiTE converges at a rate of $1/T$, where $T$ is the number of iterations. Empirical evaluations on math and coding benchmarks demonstrate that our approach consistently improves performance across different base models without requiring human-annotated thinking processes. In addition, BRiTE outperforms existing algorithms that bootstrap thinking processes using alternative methods such as rejection sampling, and can even match or exceed the results achieved through supervised fine-tuning with human-annotated data.
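The two-step loop described in the abstract can be sketched in a heavily simplified toy form. In this hypothetical illustration (not the paper's actual implementation), the "model" is just a categorical distribution over a small set of candidate rationales: stage 1 approximates the optimal thinking process with a reward-tilted posterior, standing in for the RL step with reward shaping, and stage 2 moves the model toward that posterior, standing in for maximizing the joint probability of rationale generation. All function names and the mixture update rule are illustrative assumptions.

```python
import math

def brite_step(probs, rewards, beta=1.0, lr=0.5):
    """One toy BRiTE iteration over a categorical rationale distribution.

    probs:   current model probabilities p(z) over candidate rationales
    rewards: evaluation signal r(z) for each rationale (e.g., answer correctness)
    """
    # Stage 1 (stand-in for RL with reward shaping): form the reward-tilted
    # posterior q(z) proportional to p(z) * exp(beta * r(z)).
    tilted = [p * math.exp(beta * r) for p, r in zip(probs, rewards)]
    norm = sum(tilted)
    q = [t / norm for t in tilted]
    # Stage 2 (stand-in for maximizing the joint likelihood): move the model
    # parameters toward the posterior with a simple mixture update.
    return [(1 - lr) * p + lr * qi for p, qi in zip(probs, q)]

def brite(probs, rewards, T=20):
    """Run T bootstrapping iterations and return the updated distribution."""
    for _ in range(T):
        probs = brite_step(probs, rewards)
    return probs
```

Starting from a uniform distribution with a reward signal that favors one rationale, repeated iterations concentrate probability mass on the high-reward rationale, mirroring (in miniature) how bootstrapping reinforces good thinking processes over iterations.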