Scaling Code-Assisted Chain-of-Thoughts and Instructions for Model Reasoning

📅 2025-10-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing chain-of-thought (CoT) methods suffer from uncontrolled reasoning, low reasoning quality, and poor diversity; code-augmented CoT approaches further exhibit limited generalization. To address these limitations, we propose Caco, a novel code-driven CoT paradigm that enables bidirectional mapping between executable code and natural-language reasoning traces. Caco integrates program execution verification, rule-based filtering, and instruction reverse engineering to establish a self-sustaining, closed-loop reasoning data synthesis pipeline. Leveraging this framework, we construct the Caco-1.3M dataset, comprising 1.3 million high-quality reasoning samples. Fine-tuning models on this dataset achieves significant improvements over strong baselines across multiple mathematical reasoning benchmarks, markedly enhancing logical consistency and cross-task generalization.

📝 Abstract
Reasoning capability is pivotal for Large Language Models (LLMs) to solve complex tasks, yet achieving reliable and scalable reasoning remains challenging. While Chain-of-Thought (CoT) prompting has become a mainstream approach, existing methods often suffer from uncontrolled generation, insufficient quality, and limited diversity in reasoning paths. Recent efforts leverage code to enhance CoT by grounding reasoning in executable steps, but such methods are typically constrained to predefined mathematical problems, hindering scalability and generalizability. In this work, we propose Caco (Code-Assisted Chain-of-ThOught), a novel framework that automates the synthesis of high-quality, verifiable, and diverse instruction-CoT reasoning data through code-driven augmentation. Unlike prior work, Caco first fine-tunes a code-based CoT generator on existing math and programming solutions in a unified code format, then scales the data generation to a large amount of diverse reasoning traces. Crucially, we introduce automated validation via code execution and rule-based filtering to ensure logical correctness and structural diversity, followed by reverse-engineering filtered outputs into natural language instructions and language CoTs to enrich task adaptability. This closed-loop process enables fully automated, scalable synthesis of reasoning data with guaranteed executability. Experiments on our created Caco-1.3M dataset demonstrate that Caco-trained models achieve strong competitive performance on mathematical reasoning benchmarks, outperforming existing strong baselines. Further analysis reveals that Caco's code-anchored verification and instruction diversity contribute to superior generalization across unseen tasks. Our work establishes a paradigm for building self-sustaining, trustworthy reasoning systems without human intervention.
Problem

Research questions and friction points this paper is trying to address.

Achieving reliable and scalable reasoning in Large Language Models
Addressing uncontrolled generation and limited diversity in reasoning paths
Enhancing Chain-of-Thought methods beyond predefined mathematical problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated synthesis of verifiable reasoning data
Code-driven augmentation for diverse instruction generation
Closed-loop process with execution-based validation