🤖 AI Summary
Current large language models (LLMs) lack dynamic synergy between textual reasoning and code generation, leaving their symbolic computation capabilities underutilized. To address this, we propose a *code-text bidirectional guidance* paradigm for symbolic enhancement, introducing the first joint training framework to integrate multi-round supervised fine-tuning (SFT) and direct preference optimization (DPO). We construct SymBench, the first benchmark of symbolic tasks with tunable complexity, and design a dual verification mechanism comprising a symbolic validator and a self-answer consistency check. Trained on 12K multi-round guidance trajectories and 5.5K preference pairs, our method boosts GPT-4o's average score on SymBench from 53.3 to 86.4, surpassing o1 (82.7), o1-preview (74.8), and DeepSeek R1 (76.8). Furthermore, cross-model generalization yields an average improvement of 41.8 points for Claude, Mistral, and GPT-3.5.
📝 Abstract
Existing methods fail to effectively steer Large Language Models (LLMs) between textual reasoning and code generation, leaving symbolic computing capabilities underutilized. We introduce CodeSteer, an effective method for guiding LLM code/text generation. We construct SymBench, a comprehensive benchmark comprising 37 symbolic tasks with adjustable complexity, and synthesize datasets of 12k multi-round guidance/generation trajectories and 5.5k guidance comparison pairs. We fine-tune the Llama-3-8B model with newly designed multi-round supervised fine-tuning (SFT) and direct preference optimization (DPO). The resulting model, CodeSteerLLM, augmented with the proposed symbolic and self-answer checkers, effectively guides the code/text generation of larger models. Augmenting GPT-4o with CodeSteer raises its average performance score from 53.3 to 86.4, outperforming the best existing LLMs OpenAI o1 (82.7), o1-preview (74.8), and DeepSeek R1 (76.8) across all 37 tasks (28 seen, 9 unseen). Although trained to steer GPT-4o, CodeSteer generalizes well, providing an average 41.8-point performance boost on Claude, Mistral, and GPT-3.5. CodeSteer-guided LLMs fully harness symbolic computing to maintain strong performance on highly complex tasks. Models, datasets, and code are available at https://github.com/yongchao98/CodeSteer-v1.0.
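The multi-round guidance described above can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the function names (`guide`, `symbolic_check`, `self_answer_check`), the propose/generate callables standing in for CodeSteerLLM and the larger task model, and the toy checker logic are all hypothetical.

```python
# Hypothetical sketch of a CodeSteer-style guidance loop: a small
# steering model repeatedly tells a larger task model whether to answer
# with text or with code, and two checkers vet each attempt.

def symbolic_check(answer: str) -> bool:
    """Stand-in symbolic validator: here it simply requires a purely
    numeric final answer (the real checker is task-specific)."""
    return answer.strip().isdigit()

def self_answer_check(answer: str, reanswer: str) -> bool:
    """Stand-in self-answer consistency check: a re-derived answer
    must agree with the original one."""
    return answer.strip() == reanswer.strip()

def guide(task: str, propose, generate, max_rounds: int = 3) -> str:
    """Multi-round loop: propose a mode ('code' or 'text'), generate an
    answer in that mode, and accept it only if both checkers pass;
    otherwise feed the failure back and try another round."""
    history = []
    answer = ""
    for _ in range(max_rounds):
        mode = propose(task, history)     # steering model's mode choice
        answer = generate(task, mode)     # larger model answers
        reanswer = generate(task, mode)   # re-derive for consistency
        if symbolic_check(answer) and self_answer_check(answer, reanswer):
            return answer
        history.append((mode, answer))    # failed round informs the next
    return answer                         # best effort after max_rounds

# Toy usage with deterministic stubs in place of real LLM calls:
# round 1 tries text mode and fails the checkers, round 2 switches to
# code mode and passes.
result = guide(
    "24 * 7",
    propose=lambda task, hist: "code" if hist else "text",
    generate=lambda task, mode: "168" if mode == "code" else "about 170?",
)
print(result)  # → 168
```

The loop itself is deliberately model-agnostic: swapping the `generate` stub for a call to a different backbone (Claude, Mistral, GPT-3.5) is what makes the cross-model transfer reported above possible.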