🤖 AI Summary
Existing code generation methods (e.g., CodeAct) suffer from fragmented reasoning in complex tasks, leading to inconsistent code and unstable execution; moreover, the absence of action-level ground truth (GT) leaves them without reliable supervision signals or principled termination conditions.
Method: We propose CodeProgram—a novel end-to-end paradigm—and Tree-of-Code (ToC), an unsupervised, self-growing framework that dynamically constructs a tree-structured search space of executable code paths. ToC enables parallel multi-branch exploration, zero-shot inference, and self-generated data training—achieving GT-free self-supervised learning and automatic termination via executability-driven guidance.
Results: Experiments across two benchmarks and ten zero-shot LLMs show our method improves accuracy by nearly 20% over CodeAct, reduces interaction turns by over 75%, and enables several models to surpass their multi-turn performance in a single turn.
📝 Abstract
Solving complex reasoning tasks is a key real-world application of agents. Thanks to the pretraining of Large Language Models (LLMs) on code data, recent approaches like CodeAct successfully use code as LLM agents' actions, achieving strong results. However, CodeAct greedily generates the next action's code block from fragmented thoughts, resulting in inconsistency and instability. Moreover, CodeAct lacks action-related ground truth (GT), making its supervision signals and termination conditions questionable in multi-turn interactions. To address these issues, we first introduce a simple yet effective end-to-end code generation paradigm, CodeProgram, which leverages code's systematic logic to align with global reasoning and enable cohesive problem-solving. Then, we propose Tree-of-Code (ToC), which self-grows CodeProgram nodes based on the executable nature of code and enables self-supervision in a GT-free scenario. Experimental results on two datasets with ten popular zero-shot LLMs show that ToC remarkably boosts accuracy by nearly 20% over CodeAct with fewer than a quarter of the turns. Several LLMs even perform better with one-turn CodeProgram than with multi-turn CodeAct. To further investigate the trade-off between efficacy and efficiency, we test different ToC tree sizes and exploration mechanisms. We also highlight the potential of ToC's end-to-end data generation for supervised and reinforced fine-tuning.
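The abstract's self-growing loop can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `generate` stands in for a hypothetical LLM call that produces a complete CodeProgram (optionally conditioned on a failed parent's execution feedback), and executability acts as the GT-free termination signal.

```python
import os
import subprocess
import sys
import tempfile


def run_code(code: str, timeout: int = 10):
    """Execute a candidate CodeProgram in a subprocess; return (ok, output)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout,
        )
        return proc.returncode == 0, proc.stdout + proc.stderr
    except subprocess.TimeoutExpired:
        return False, "timeout"
    finally:
        os.unlink(path)


def tree_of_code(task, generate, branching=3, max_depth=3):
    """Grow a tree of end-to-end CodeProgram nodes (illustrative sketch).

    Each node holds a complete program; failed nodes are expanded into
    children guided by their execution feedback, and the search terminates
    as soon as a node executes successfully (executability-driven, GT-free).
    """
    frontier = [(None, 0)]  # (feedback from a failed parent, depth)
    while frontier:
        feedback, depth = frontier.pop(0)
        for _ in range(branching):
            code = generate(task, feedback)
            ok, output = run_code(code)
            if ok:
                return code, output  # executable: stop growing the tree
            if depth + 1 < max_depth:
                frontier.append((output, depth + 1))  # expand failed node
    return None, None  # search space exhausted without an executable program
```

In a real system the `for` loop over branches would run in parallel (the paper's multi-branch exploration), and `generate` would be an LLM prompted with the task plus the parent's error trace.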