🤖 AI Summary
This work addresses the limited reasoning and planning capabilities in code generation by proposing Code World Model (CWM), an open-weights 32B-parameter large language model that integrates the world-model paradigm into code intelligence. CWM is mid-trained on observation-action trajectories from Python interpreter and agentic Docker environments, learning from executable traces rather than static code alone; it further combines supervised fine-tuning with multi-task reinforcement learning covering verifiable coding, mathematical reasoning, and multi-turn software engineering tasks. Its dense, decoder-only architecture supports a context window of up to 131k tokens, enabling step-by-step simulation of dynamic execution environments and causal reasoning about code. Experiments demonstrate strong results: 65.8% pass@1 on SWE-bench Verified (with test-time scaling), 68.6% on LiveCodeBench, 96.6% on Math-500, and 76.0% on AIME 2024. These results support the efficacy of world modeling for code-level reasoning. Checkpoints after mid-training, SFT, and RL are publicly released to advance research in agentic programming.
📝 Abstract
We release Code World Model (CWM), a 32-billion-parameter open-weights LLM, to advance research on code generation with world models. To improve code understanding beyond what can be learned from training on static code alone, we mid-train CWM on a large number of observation-action trajectories from Python interpreter and agentic Docker environments, and perform extensive multi-task reasoning RL in verifiable coding, math, and multi-turn software engineering environments. With CWM, we provide a strong testbed for researchers to explore the opportunities world modeling affords for improving code generation with reasoning and planning in computational environments. We present first steps toward showing how world models can benefit agentic coding, enable step-by-step simulation of Python code execution, and show early results of how reasoning can benefit from the latter. CWM is a dense, decoder-only LLM trained with a context size of up to 131k tokens. Independent of its world modeling capabilities, CWM offers strong performance on general coding and math tasks: it reaches pass@1 scores of 65.8% on SWE-bench Verified (with test-time scaling), 68.6% on LiveCodeBench, 96.6% on Math-500, and 76.0% on AIME 2024. To support further research on code world modeling, we release model checkpoints after mid-training, SFT, and RL.
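To make the "observation-action trajectory" idea concrete, here is a minimal, hypothetical sketch (not the authors' actual data pipeline) of how one might record such trajectories from a Python interpreter: each action is a statement that gets executed, and each observation is a snapshot of the interpreter's variable state afterwards. The function name and trajectory format are illustrative assumptions.

```python
def collect_trajectory(statements):
    """Execute statements sequentially in a shared namespace,
    recording (action, observation) pairs.

    Illustrative sketch only: the real CWM training data format
    is not specified here.
    """
    env = {}          # shared interpreter namespace
    trajectory = []
    for stmt in statements:
        exec(stmt, env)  # action: run one statement
        # observation: variable state after the action
        # (drop the __builtins__ entry that exec injects)
        observation = {k: v for k, v in env.items() if k != "__builtins__"}
        trajectory.append((stmt, observation))
    return trajectory


if __name__ == "__main__":
    traj = collect_trajectory(["x = 2", "y = x * 3", "x += y"])
    for action, obs in traj:
        print(action, "->", obs)
```

Serialized as text, such (statement, state) pairs give a model supervision on how code execution changes program state, which is the kind of signal that static source files alone cannot provide.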