Token-Aware Coding Flow: A Study with Nano Surge in Reasoning Model

📅 2025-04-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from token bloat during chain-of-thought (CoT)-driven code generation and optimization, particularly when complex code smells are present. Method: We propose Token-Aware Coding Flow (TACF), the first framework to jointly model code smell detection, structure-preserving automated refactoring, and context-aware prompt engineering, enabling token-aware optimization at the reasoning-flow level. TACF employs role-constrained prompting, structured prompt design, and lightweight, semantics-preserving refactoring. Contribution/Results: While preserving functional consistency, TACF reduces raw code token consumption by up to 50%; integrated prompt optimization cuts token usage by a further 24.5%–30%. This yields substantial improvements in inference efficiency and generated code quality.

📝 Abstract
With the widespread application of large language models (LLMs) in software engineering, the Chain of Thought (CoT) approach has emerged as a crucial tool for driving automated code generation and optimization. However, despite the significant success of CoT methods in generating high-quality code, token inflation during the reasoning process remains a formidable challenge to model performance and efficiency, particularly when dealing with complex code smells. Code smells not only affect the maintainability and scalability of code but also significantly increase the computational burden during LLM inference, leading to excessive token consumption and, consequently, reduced reasoning efficiency. This paper introduces Token-Aware Coding Flow, a method aimed at addressing the token inflation caused by smelly code in the CoT process. Through experimentation, we validate the synergistic effect of code refactoring and prompt engineering strategies, demonstrating that after eliminating code smells, token consumption during model inference is significantly reduced. The experimental results show that refactored code, while maintaining functional consistency, can reduce token consumption by up to 50%. Additionally, by explicitly stating the type of code smell in the prompt and incorporating strategies such as context awareness and role constraints, we further optimize the reasoning process, achieving a 24.5% to 30% reduction in token consumption. These optimizations not only significantly enhance the model's reasoning efficiency and improve code generation quality but also provide new insights for addressing performance bottlenecks in complex code generation tasks.
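The refactoring effect described above can be illustrated with a minimal sketch. The smelly/refactored snippets and the regex tokenizer here are illustrative assumptions, not the paper's benchmark or the model's actual tokenizer; the point is only that removing a duplicated-code smell shrinks the token footprint the model must reason over.

```python
import re

# Naive stand-in tokenizer (assumption: the paper would count tokens
# with the reasoning model's own tokenizer, e.g. a BPE vocabulary).
def count_tokens(code: str) -> int:
    return len(re.findall(r"\w+|[^\w\s]", code))

# "Duplicated code" smell: the same validation repeated per field.
smelly = """
def validate(user):
    if user.get("name") is None or user.get("name") == "":
        raise ValueError("name missing")
    if user.get("email") is None or user.get("email") == "":
        raise ValueError("email missing")
    if user.get("phone") is None or user.get("phone") == "":
        raise ValueError("phone missing")
"""

# Lightweight, behavior-preserving refactoring: one loop over the fields
# (assuming the fields hold strings, so falsiness matches the checks above).
refactored = """
def validate(user):
    for field in ("name", "email", "phone"):
        if not user.get(field):
            raise ValueError(f"{field} missing")
"""

before, after = count_tokens(smelly), count_tokens(refactored)
print(before, after, f"{1 - after / before:.0%} fewer tokens")
```

The reduction ratio here is specific to this toy snippet; the paper's up-to-50% figure comes from its own benchmark of smell types.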
Problem

Research questions and friction points this paper is trying to address.

Addresses token inflation in Chain of Thought code generation.
Reduces token consumption by eliminating code smells.
Optimizes reasoning efficiency via prompt engineering strategies.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Token-Aware Coding Flow reduces token inflation
Code refactoring cuts token use by up to 50%
Prompt engineering lowers token consumption by 24.5%–30%
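The prompt-engineering strategies named in the abstract (explicit smell type, context awareness, role constraints) can be sketched as a prompt builder. The template wording and function name below are assumptions for illustration, not the authors' exact prompts.

```python
def build_prompt(code: str, smell_type: str, context: str) -> str:
    """Assemble a refactoring prompt using the three strategies
    described in the paper (wording is a hypothetical sketch)."""
    return "\n".join([
        # Role constraint: narrow the persona to curb verbose reasoning.
        "You are a refactoring assistant. Respond with code only.",
        # Context awareness: pass only the relevant surrounding context.
        f"Context: {context}",
        # Explicit smell type: skip the model's own detection step.
        f"The code below contains a '{smell_type}' smell. "
        "Refactor it while preserving behavior.",
        code,
    ])

prompt = build_prompt(
    "def f(a, b, c, d, e, g, h): ...",
    "long parameter list",
    "utility module, no external callers",
)
print(prompt)
```

Naming the smell up front spares the model the CoT tokens it would otherwise spend detecting the smell itself, which is one plausible source of the reported 24.5%–30% saving.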
Junwei Hu
Undergraduate Student of Software Engineering, Tongji University
Software Engineering · AI4SE · SE4AI · NLP
Weicheng Zheng
School of Computer Science and Technology, Tongji University, Shanghai, China
Yan Liu
School of Computer Science and Technology, Tongji University, Shanghai, China
Yihan Liu
School of Computer Science and Technology, Tongji University, Shanghai, China