Self-Training Large Language Models with Confident Reasoning

📅 2025-05-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) reason better when trained on explicit reasoning paths, but human supervision of such paths is costly, and conventional self-training alternatives rely solely on answer-level correctness as the supervisory signal, which can reward flawed reasoning that reaches a correct answer by chance. Method: This paper proposes a confidence-aware self-training framework that operates at the reasoning-path level, using fine-grained confidence over intermediate reasoning steps rather than only final-answer agreement. The resulting method, CORE-PO, is a policy-optimization framework that combines multi-path sampling, confidence-based ranking, and pseudo-label generation to iteratively identify and reinforce high-quality reasoning trajectories. Contribution/Results: Evaluated on four in-distribution and two out-of-distribution benchmarks, the approach consistently outperforms existing self-training methods in output accuracy.
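The sampling–ranking–pseudo-labeling loop described in the summary above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-step confidence scores are assumed to be given (in CORE-PO they come from the model itself), and the mean aggregation and top-vs-bottom pairing are illustrative choices.

```python
from dataclasses import dataclass

@dataclass
class Path:
    steps: list       # intermediate reasoning steps (strings)
    answer: str       # final answer extracted from the path
    step_conf: list   # per-step confidence scores in [0, 1] (assumed given)

def path_confidence(path):
    """Aggregate per-step confidences into one reasoning-level score.
    The mean is an illustrative choice; the paper's aggregation may differ."""
    return sum(path.step_conf) / len(path.step_conf)

def build_preference_pairs(paths):
    """Rank sampled paths by reasoning-level confidence and pair the most-
    vs. least-confident ones as (preferred, rejected) pseudo-labels for a
    preference-based policy-optimization update."""
    ranked = sorted(paths, key=path_confidence, reverse=True)
    half = len(ranked) // 2
    return list(zip(ranked[:half], ranked[-half:]))

# Toy example: four sampled paths for one question.
paths = [
    Path(["a", "b"], "42", [0.9, 0.8]),
    Path(["c"], "41", [0.3]),
    Path(["d", "e"], "42", [0.6, 0.7]),
    Path(["f"], "40", [0.2]),
]
pairs = build_preference_pairs(paths)
```

Each `(preferred, rejected)` pair could then feed a standard preference-optimization objective; the point of the sketch is only that supervision is attached to whole reasoning trajectories, not to final answers alone.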

📝 Abstract
Large language models (LLMs) have shown impressive performance by generating reasoning paths before final answers, but learning such reasoning paths requires costly human supervision. To address this issue, recent studies have explored self-training methods that improve reasoning capabilities using pseudo-labels generated by the LLMs themselves. Among these, confidence-based self-training fine-tunes LLMs to prefer reasoning paths with high-confidence answers, where confidence is estimated via majority voting. However, such methods focus exclusively on the quality of the final answer and may ignore the quality of the reasoning paths, as even an incorrect reasoning path may lead to a correct answer by chance. Instead, we advocate the use of reasoning-level confidence to identify high-quality reasoning paths for self-training, supported by our empirical observations. We then propose a new self-training method, CORE-PO, that fine-tunes LLMs to prefer high-COnfidence REasoning paths through Policy Optimization. Our experiments show that CORE-PO improves the accuracy of outputs on four in-distribution and two out-of-distribution benchmarks, compared to existing self-training methods.
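The majority-voting confidence that the abstract contrasts against can be computed directly from sampled final answers; a minimal sketch of that baseline signal:

```python
from collections import Counter

def answer_confidence(answers):
    """Answer-level confidence via majority voting: the share of sampled
    reasoning paths that end in the most common final answer."""
    counts = Counter(answers)
    top_answer, top_count = counts.most_common(1)[0]
    return top_answer, top_count / len(answers)

# Eight sampled paths for one question; five agree on "12".
top, conf = answer_confidence(["12", "12", "7", "12", "12", "9", "12", "7"])
# top == "12", conf == 5/8 == 0.625
```

Note that a flawed reasoning path that happens to end in "12" would still be preferred under this signal, which is exactly the failure mode that motivates the reasoning-level confidence used by CORE-PO.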
Problem

Research questions and friction points this paper is trying to address.

Reducing human supervision in LLM reasoning training
Improving reasoning path quality in self-training methods
Enhancing output accuracy across multiple benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-training LLMs with confident reasoning paths
Estimating reasoning-level confidence rather than answer-level majority voting
Fine-tuning LLMs using Policy Optimization