🤖 AI Summary
Large language models (LLMs) often fail on complex reasoning tasks when every candidate response they generate is incorrect, since standard inference scaling methods have no mechanism for recovering a correct solution from flawed reasoning. Method: This paper proposes CoT-based Synthesizer, a framework that, without requiring any correct candidate, uses Chain-of-Thought (CoT) reasoning to extract complementary signals from multiple incorrect responses and synthesize an accurate answer. It comprises multi-candidate CoT analysis, response fusion, an automated pipeline for generating high-quality training data, and lightweight supervised fine-tuning of small synthesizer models. The approach applies to both closed-source API models (e.g., GPT-4o) and open-weight LLMs (e.g., Llama3-8B). Contribution/Results: Across four reasoning benchmarks and seven policy models, the method yields consistent gains, improving accuracy on the MATH dataset by 11.8% for Llama3-8B and 10.3% for GPT-4o. All code and data are publicly released.
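The flow described above can be sketched in a few lines: sample several candidate responses from a policy model, then prompt a synthesizer model to analyze them with CoT and fuse a final answer. This is a minimal illustration only; the helper names, prompt wording, and stand-in generators below are assumptions, not the authors' implementation (the real prompts and training scripts live in the linked repository).

```python
# Hypothetical sketch of the candidate-sampling + CoT-synthesis flow (not the paper's code).
from typing import Callable, List

def sample_candidates(generate: Callable[[str], str], question: str, n: int = 5) -> List[str]:
    """Sample n candidate responses from the policy model (e.g., Llama3-8B or GPT-4o)."""
    return [generate(f"Solve step by step:\n{question}") for _ in range(n)]

def synthesize_answer(synthesize: Callable[[str], str], question: str, candidates: List[str]) -> str:
    """Ask the synthesizer model to analyze all candidates with CoT and produce one fused answer,
    even when every candidate is flawed."""
    numbered = "\n\n".join(f"Candidate {i + 1}:\n{c}" for i, c in enumerate(candidates))
    prompt = (
        "You are given a question and several candidate solutions, possibly all incorrect.\n"
        "Analyze their reasoning step by step, identify the complementary correct steps,\n"
        "and synthesize a single corrected solution.\n\n"
        f"Question:\n{question}\n\n{numbered}\n\nSynthesized solution:"
    )
    return synthesize(prompt)

if __name__ == "__main__":
    # Stand-in generators; replace with real policy / synthesizer model calls.
    policy = lambda p: "Candidate reasoning ... final answer: 42"
    synthesizer = lambda p: "Synthesized reasoning ... final answer: 42"
    question = "What is 6 * 7?"
    candidates = sample_candidates(policy, question, n=3)
    print(synthesize_answer(synthesizer, question, candidates))
```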
📝 Abstract
Current inference scaling methods, such as Self-consistency and Best-of-N, have proven effective in improving the accuracy of LLMs on complex reasoning tasks. However, these methods rely heavily on the quality of candidate responses and are unable to produce correct answers when all candidates are incorrect. In this paper, we propose a novel inference scaling strategy, CoT-based Synthesizer, which leverages CoT reasoning to synthesize superior answers by analyzing complementary information from multiple candidate responses, even when all candidate responses are flawed. To enable a lightweight and cost-effective implementation, we introduce an automated data generation pipeline that creates diverse training data. This allows smaller LLMs trained on this data to improve the inference accuracy of larger models, including API-based LLMs. Experimental results across four benchmark datasets with seven policy models demonstrate that our method significantly enhances performance, with gains of 11.8% for Llama3-8B and 10.3% for GPT-4o on the MATH dataset. The corresponding training data and code are publicly available at https://github.com/RUCKBReasoning/CoT-based-Synthesizer.
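The abstract's automated data generation pipeline can be pictured as: sample candidate sets for training questions, have a strong LLM synthesize a CoT solution from each set, and keep only examples whose synthesized answer can be verified, yielding (input, target) pairs for fine-tuning a small synthesizer. The sketch below is a hedged illustration under those assumptions; the function names, the answer-matching filter, and the data format are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of an automated SFT data-generation pipeline (assumed details).
from typing import Callable, Dict, List

def build_sft_example(question: str, candidates: List[str], synthesis: str) -> Dict[str, str]:
    """Pack one (input, target) pair for supervised fine-tuning of a small synthesizer model."""
    numbered = "\n\n".join(f"Candidate {i + 1}:\n{c}" for i, c in enumerate(candidates))
    return {"input": f"Question:\n{question}\n\n{numbered}", "target": synthesis}

def generate_training_data(
    questions: List[str],
    references: List[str],
    sample: Callable[[str], List[str]],           # policy model -> candidate responses
    synthesize: Callable[[str, List[str]], str],  # strong LLM -> synthesized CoT solution
    answers_match: Callable[[str, str], bool],    # answer extraction + comparison (assumed filter)
) -> List[Dict[str, str]]:
    """Collect only the examples whose synthesized answer agrees with the reference answer."""
    data: List[Dict[str, str]] = []
    for question, reference in zip(questions, references):
        candidates = sample(question)
        synthesis = synthesize(question, candidates)
        if answers_match(synthesis, reference):
            data.append(build_sft_example(question, candidates, synthesis))
    return data
```

The resulting examples would then be used for standard supervised fine-tuning of a smaller model, which at inference time plays the synthesizer role for larger or API-based policy models.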