CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction

📅 2025-02-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) struggle to improve on many reasoning tasks, such as symbolic, scientific, logical, mathematical, and commonsense reasoning, because training data for these skills is sparse and fragmented. Method: This paper introduces CodeI/O, a framework that uses real-world code input-output prediction as a surrogate for abstract reasoning: models are trained to predict inputs or outputs given code and test cases, entirely in natural language as Chain-of-Thought (CoT) rationales, abstracting away programming syntax to distill universal reasoning primitives (e.g., logic-flow planning, state-space search, decision-tree traversal). Because each prediction can be verified by matching ground-truth outputs or re-executing the code with predicted inputs, the CoTs can be further refined through multi-turn revision, yielding the stronger CodeI/O++ variant. Results: CodeI/O achieves consistent and significant improvements across diverse reasoning benchmarks, with additional gains from CodeI/O++. Data and models are publicly released.

📝 Abstract
Reasoning is a fundamental capability of Large Language Models. While prior research predominantly focuses on enhancing narrow skills like math or code generation, improving performance on many other reasoning tasks remains challenging due to sparse and fragmented training data. To address this issue, we propose CodeI/O, a novel approach that systematically condenses diverse reasoning patterns inherently embedded in contextually-grounded codes, through transforming the original code into a code input-output prediction format. By training models to predict inputs/outputs given code and test cases entirely in natural language as Chain-of-Thought (CoT) rationales, we expose them to universal reasoning primitives -- like logic flow planning, state-space searching, decision tree traversal, and modular decomposition -- while decoupling structured reasoning from code-specific syntax and preserving procedural rigor. Experimental results demonstrate CodeI/O leads to consistent improvements across symbolic, scientific, logic, math&numerical, and commonsense reasoning tasks. By matching the existing ground-truth outputs or re-executing the code with predicted inputs, we can verify each prediction and further enhance the CoTs through multi-turn revision, resulting in CodeI/O++ and achieving higher performance. Our data and models are available at https://github.com/hkust-nlp/CodeIO.
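The verification step the abstract describes can be sketched minimally: given a function and a ground-truth output, a model's predicted input is checked by re-executing the code. The function, names, and test case below are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of CodeI/O-style verification (illustrative, not the
# paper's implementation): a model predicts an input for a known
# function/output pair, and re-execution checks the prediction.

def reference_fn(xs):
    """Example task function: sum of squares of a list of numbers."""
    return sum(x * x for x in xs)

def verify_input_prediction(fn, predicted_input, expected_output):
    """Re-execute fn on the predicted input and compare to ground truth."""
    try:
        return fn(predicted_input) == expected_output
    except Exception:
        # Malformed predictions simply fail verification.
        return False

# Suppose the model, reasoning in natural language, predicts [1, 2, 3]
# as an input yielding 14 (1 + 4 + 9).
assert verify_input_prediction(reference_fn, [1, 2, 3], 14)

# A wrong prediction fails and would be routed to multi-turn revision.
assert not verify_input_prediction(reference_fn, [1, 2], 14)
```

In the paper's setup this check is what makes the CoT rationales verifiable, enabling the multi-turn revision loop behind CodeI/O++.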
Problem

Research questions and friction points this paper is trying to address.

Prior work concentrates on narrow skills like math and code generation
Many other reasoning tasks are hard to improve due to sparse, fragmented training data
Diverse reasoning patterns embedded in real-world code are not systematically exploited
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transforms raw code into an input-output prediction training format
Trains on natural-language CoT rationales to surface universal reasoning primitives decoupled from syntax
Verifies predictions via re-execution and enhances CoTs through multi-turn revision (CodeI/O++)