🤖 AI Summary
This work investigates how code generation and complex reasoning enhance each other in large language models (LLMs). To address the mutual constraints between code intelligence and advanced reasoning, we propose a synergistic paradigm in which “code serves as a reasoning medium” and “reasoning acts as a code engine,” and establish a unified analytical framework that formalizes the bidirectional empowerment pathways and their evaluation dimensions. Methodologically, we integrate program analysis, chain-of-thought prompting, execution-guided learning, symbolic reasoning, and runtime verification to enable multi-granularity joint modeling of code and reasoning, and introduce three novel techniques: scalable co-training, dynamic code distillation, and reasoning-guided fine-tuning. Experiments demonstrate that our framework significantly improves accuracy and robustness on complex software engineering tasks, advancing code intelligence from snippet-level completion toward end-to-end, engineering-grade problem solving.
📝 Abstract
In large language models (LLMs), code and reasoning reinforce each other: code offers an abstract, modular, and logic-driven structure that supports reasoning, while reasoning translates high-level goals into smaller, executable steps that drive more advanced code intelligence. In this study, we examine how code serves as a structured medium for enhancing reasoning: it provides verifiable execution paths, enforces logical decomposition, and enables runtime validation. We also explore how improvements in reasoning have transformed code intelligence from basic completion into advanced capabilities, enabling models to address complex software engineering tasks through planning and debugging. Finally, we identify key challenges and propose future research directions to strengthen this synergy, ultimately improving LLMs' performance in both areas.
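As an illustrative sketch (not taken from the paper), the idea of code as a reasoning medium with "verifiable execution paths" and "runtime validation" is often realized by having a model emit a small executable program instead of free-form chain-of-thought, so that each intermediate step can be run and checked. A minimal Python example, using a hypothetical word problem and function name:

```python
# Illustrative sketch: reasoning expressed as executable code.
# Each step of the solution is an assignment whose value can be
# inspected, and an assertion provides runtime validation.

def solve_work_rate_problem() -> float:
    """Two painters finish a wall alone in 6 h and 3 h.
    How long do they take working together? (hypothetical example)"""
    rate_a = 1 / 6            # walls per hour, painter A
    rate_b = 1 / 3            # walls per hour, painter B
    combined_rate = rate_a + rate_b
    hours = 1 / combined_rate # time to paint one wall together
    # Runtime check: working together must beat the faster painter alone.
    assert hours < 3, "combined time should be less than either alone"
    return hours

print(solve_work_rate_problem())  # 2.0
```

Because the decomposition is executable, an incorrect intermediate step (e.g. adding times instead of rates) is caught by the assertion or by the final result, rather than hiding inside natural-language reasoning.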