SEER: Enhancing Chain-of-Thought Code Generation through Self-Exploring Deep Reasoning

πŸ“… 2025-10-19
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing chain-of-thought (CoT) code generation methods suffer from three key limitations: (1) reliance on a single, rigid reasoning path; (2) absence of real-time quality assessment for intermediate reasoning steps; and (3) susceptibility to redundant, overly complex β€œover-reasoning.” To address these, we formulate CoT as a sequential decision-making problem and propose SEERβ€”a unified framework comprising: (1) a self-exploration mechanism enabling label-free, diverse reasoning path search; (2) a lightweight value model for online quality evaluation of intermediate steps; and (3) an adaptive controller that dynamically switches between direct generation and stepwise reasoning modes. SEER jointly optimizes the policy and value models via reinforcement learning. Empirical evaluation across multiple code generation benchmarks demonstrates substantial improvements in correctness and generalization, while effectively mitigating over-reasoning and enhancing both reasoning path diversity and output reliability.

πŸ“ Abstract
Code generation, the task of creating executable programs from natural language requirements, has recently seen tremendous advances through Chain-of-Thought (CoT) reasoning, which enables Large Language Models (LLMs) to develop high-level reasoning plans before writing code. Recent research has proposed various methods to enhance models' CoT reasoning for code generation, such as prompt engineering and supervised fine-tuning. However, existing approaches still face three critical limitations: (1) limited exploration of diverse reasoning paths, which constrains generalization across various programming scenarios; (2) lack of quality assessment for intermediate reasoning steps, which hampers the reliability of the generated plans and code; and (3) the negative impact of "overthinking", which can lead to unnecessarily complex and incorrect solutions. To address these limitations, we frame CoT code generation as a decision-making problem and present SEER, a SElf-Exploring deep Reasoning framework that enables accurate and adaptive reasoning for code generation. SEER introduces three key components: (1) Diverse reasoning path exploration, which aims at exploring diverse reasoning paths and annotating intermediate steps without relying on manual experts or closed-source proprietary models; (2) Reasoning quality-aware model training, which trains a policy model for generating candidate reasoning steps and a value model for assessing their quality; and (3) Adaptive CoT reasoning, which dynamically switches between direct generation and step-by-step reasoning for different problems.
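The policy/value interplay described in the abstract can be pictured as a simple step-selection loop: the policy model proposes candidate next reasoning steps, the value model scores each in real time, and the best-scoring step extends the chain. The sketch below is only an illustration of that idea, not the paper's implementation; `propose_steps`, `score_step`, and the stop token are hypothetical placeholders.

```python
from typing import Callable, List

def stepwise_reason(
    problem: str,
    propose_steps: Callable[[str, List[str]], List[str]],  # policy model (assumed interface)
    score_step: Callable[[str, List[str], str], float],    # value model (assumed interface)
    max_steps: int = 8,
    stop_token: str = "<code>",
) -> List[str]:
    """Greedy value-guided CoT: grow the reasoning chain one best-scored step at a time."""
    chain: List[str] = []
    for _ in range(max_steps):
        candidates = propose_steps(problem, chain)
        if not candidates:
            break
        # Online quality assessment: keep the step the value model rates highest.
        best = max(candidates, key=lambda s: score_step(problem, chain, s))
        chain.append(best)
        if stop_token in best:  # policy signals it is ready to emit code
            break
    return chain
```

A beam or tree search over candidate steps would generalize this greedy loop; the key point is that intermediate steps are filtered by a learned value estimate rather than accepted blindly.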
Problem

Research questions and friction points this paper is trying to address.

Limited exploration of diverse reasoning paths in code generation
Lack of quality assessment for intermediate reasoning steps
Potential negative impact of overthinking in reasoning processes
Innovation

Methods, ideas, or system contributions that make the work stand out.

SEER framework enables self-exploring diverse reasoning paths
Trains policy and value models for reasoning quality assessment
Adaptively switches between direct and step-by-step reasoning
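The adaptive-switching idea in the last bullet can be sketched as a small router: attempt direct generation first, and fall back to step-by-step reasoning only when a confidence estimate (here, a value-model score on the direct draft) is low. The threshold and all helper names are assumptions for illustration, not the paper's actual mechanism.

```python
from typing import Callable

def generate_adaptively(
    problem: str,
    direct_generate: Callable[[str], str],        # fast single-pass generation (assumed)
    stepwise_generate: Callable[[str], str],      # deliberate CoT generation (assumed)
    estimate_confidence: Callable[[str, str], float],  # value-model proxy (assumed)
    threshold: float = 0.8,
) -> str:
    """Route easy problems to direct generation; reserve CoT for hard ones."""
    draft = direct_generate(problem)
    if estimate_confidence(problem, draft) >= threshold:
        return draft  # confident: skip stepwise reasoning and avoid over-reasoning
    return stepwise_generate(problem)  # low confidence: reason step by step
```

This framing makes the over-reasoning mitigation concrete: simple problems never pay the latency and error cost of a long reasoning chain.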