Tree-of-Code: A Tree-Structured Exploring Framework for End-to-End Code Generation and Execution in Complex Task Handling

📅 2024-12-19
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Existing code generation methods (e.g., CodeAct) suffer from fragmented reasoning in complex tasks, leading to inconsistent code and unstable execution; moreover, their reliance on manually annotated action-level ground truth (GT) hinders reliable supervision and termination detection. Method: We propose CodeProgram—a novel end-to-end paradigm—and Tree-of-Code (ToC), an unsupervised, self-growing framework that dynamically constructs a tree-structured search space of executable code paths. ToC enables parallel multi-branch exploration, zero-shot inference, and self-generated data training—achieving GT-free self-supervised learning and automatic termination via executability-driven guidance. Results: Experiments across two benchmarks and ten zero-shot LLMs show our method improves accuracy by nearly 20% over CodeAct, reduces interaction turns by over 75%, and enables several models to surpass their multi-turn performance in a single turn.

📝 Abstract
Solving complex reasoning tasks is a key real-world application of agents. Thanks to the pretraining of Large Language Models (LLMs) on code data, recent approaches like CodeAct successfully use code as LLM agents' action, achieving good results. However, CodeAct greedily generates the next action's code block by relying on fragmented thoughts, resulting in inconsistency and instability. Moreover, CodeAct lacks action-related ground-truth (GT), making its supervision signals and termination conditions questionable in multi-turn interactions. To address these issues, we first introduce a simple yet effective end-to-end code generation paradigm, CodeProgram, which leverages code's systematic logic to align with global reasoning and enable cohesive problem-solving. Then, we propose Tree-of-Code (ToC), which self-grows CodeProgram nodes based on the executable nature of the code and enables self-supervision in a GT-free scenario. Experimental results on two datasets using ten popular zero-shot LLMs show ToC remarkably boosts accuracy by nearly 20% over CodeAct with less than 1/4 turns. Several LLMs even perform better on one-turn CodeProgram than on multi-turn CodeAct. To further investigate the trade-off between efficacy and efficiency, we test different ToC tree sizes and exploration mechanisms. We also highlight the potential of ToC's end-to-end data generation for supervised and reinforced fine-tuning.
Problem

Research questions and friction points this paper is trying to address.

Improves code generation consistency in complex reasoning tasks
Addresses lack of action-related ground-truth in multi-turn interactions
Enhances accuracy and efficiency in end-to-end code generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

End-to-end code generation with systematic logic
Self-growing tree-structured code program nodes
Self-supervision in ground-truth-free scenarios
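The self-growing mechanism described above can be sketched as a tree search over candidate programs that terminates when a node's code executes successfully. This is a minimal, heavily simplified illustration under stated assumptions — `generate_program` is a hypothetical stand-in for the LLM call, and the expansion policy is illustrative, not the paper's implementation:

```python
# Sketch of a ToC-style self-growing tree (hypothetical API; generate_program
# stands in for an LLM call that emits one end-to-end program per node).
from dataclasses import dataclass, field

@dataclass
class Node:
    program: str
    children: list = field(default_factory=list)

def runs_ok(program: str) -> bool:
    """Executability check: a candidate terminates the search iff it runs."""
    try:
        exec(compile(program, "<candidate>", "exec"), {})
        return True
    except Exception:
        return False

def generate_program(task: str, feedback: str) -> str:
    # Toy stand-in for the LLM: the first attempt fails to execute,
    # and expansion with execution feedback yields a runnable revision.
    if "broken" in feedback:
        return "result = sum(range(5))"
    return "result = undefined_name + 1"

def tree_of_code(task: str, branching: int = 2, max_depth: int = 3):
    root = Node(generate_program(task, feedback=""))
    frontier = [root]
    for _ in range(max_depth):
        next_frontier = []
        for node in frontier:
            if runs_ok(node.program):      # executability-driven termination
                return node.program
            for _ in range(branching):     # multi-branch expansion
                child = Node(generate_program(task, feedback="broken"))
                node.children.append(child)
                next_frontier.append(child)
        frontier = next_frontier
    return None  # no executable program found within the budget
```

In this toy run, the root program raises a `NameError`, so the tree grows children from the execution feedback; a child that executes cleanly ends the search — no action-level ground truth is consulted at any point.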
Ziyi Ni
Institute of Automation, Chinese Academy of Sciences
LLM agent, code agent, large language model, multimodal LLM, temporal modeling
Yifan Li
Global Innovation Exchange Institution, Tsinghua University
Ning Yang
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences
Dou Shen
Baidu Inc
Data Mining, Machine Learning, Online Advertising
Pin Lv
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences
Daxiang Dong
Baidu
Deep Learning, Natural Language Processing, Data Mining