🤖 AI Summary
Code formatting—such as indentation and line breaks—enhances human readability but may introduce redundancy and computational overhead for large language models (LLMs) in code completion tasks.
Method: This paper systematically investigates the impact of formatting elements on LLM performance and inference efficiency in code completion, proposing a lightweight bidirectional formatting removal strategy, validated on Fill-in-the-Middle (FIM) tasks.
Contribution/Results: Experiments across Java, Python, C++, and C#, using ten state-of-the-art LLMs, show that removing formatting incurs no statistically significant accuracy degradation while reducing average input token count by 24.5%, substantially lowering latency and inference cost. The approach is fully compatible with prompt engineering and fine-tuning, and its implementation integrates seamlessly into existing inference pipelines—preserving source-code readability for developers while improving model efficiency.
📝 Abstract
Source code is usually formatted with elements like indentation and newlines to improve readability for human developers. However, these visual aids do not seem to benefit large language models (LLMs) in the same way, since the code is processed as a linear sequence of tokens. Furthermore, these additional tokens can lead to increased computational costs and longer response times for LLMs. If such formatting elements are non-essential to LLMs, we can reduce these costs by removing them from the code. To determine the role formatting elements play, we conduct a comprehensive empirical study to evaluate the impact of code formatting on LLM performance and efficiency. Through large-scale experiments on Fill-in-the-Middle code completion tasks across four programming languages (Java, Python, C++, C#) and ten LLMs, including both commercial and open-source models, we systematically analyze token count and performance when formatting elements are removed. Key findings indicate that LLMs can maintain performance across formatted and unformatted code, achieving an average input token reduction of 24.5% with negligible output token reductions. This makes code format removal a practical optimization strategy for improving LLM efficiency. Further exploration reveals that both prompting and fine-tuning LLMs can lead to significant reductions (up to 36.1%) in output code length without compromising correctness. To facilitate practical applications, we develop a bidirectional code transformation tool for format processing, which can be seamlessly integrated into existing LLM inference workflows, ensuring both human readability and LLM efficiency.
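The core idea can be illustrated with a minimal sketch (this is not the paper's actual tool, which is bidirectional and language-aware): for whitespace-insensitive languages such as Java, C++, and C#, collapsing indentation and newlines into single spaces shrinks the prompt without changing program semantics, and a standard formatter (e.g. `clang-format` or an IDE formatter) can restore readable layout on the way back to the developer.

```python
import re

def strip_formatting(code: str) -> str:
    """Collapse runs of whitespace (indentation, newlines) into single
    spaces. Illustrative only: safe for whitespace-insensitive languages
    like Java/C++/C#, but NOT for Python, where indentation is semantic
    and a real implementation must preserve it."""
    # Replace every run of whitespace characters with one space.
    return re.sub(r"\s+", " ", code).strip()

java_snippet = """public int add(int a, int b) {
    return a + b;
}"""

compact = strip_formatting(java_snippet)
print(compact)  # public int add(int a, int b) { return a + b; }
```

Restoring formatting for the human-facing side of the round trip would be delegated to an off-the-shelf pretty-printer for the target language, which is why the naive collapse above is lossless in practice for brace-delimited languages.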