🤖 AI Summary
This work revisits the value of syntax-aware representations in billion-parameter large language models (LLMs) for code generation. Although syntax errors have become rare in models at this scale, it has remained unclear whether explicit syntactic information still helps. To answer this, the authors propose GrammarCoder, a family of models that integrate programming-language grammar through context-free grammar (CFG)-guided decoding, syntax-constrained token prediction, and a syntax-aware embedding module adapted to the Transformer architecture. The study provides the first empirical evidence that syntactic information continues to improve semantic discrimination, not merely syntactic correctness, in billion-parameter LLMs, mitigating semantic errors induced by minor code perturbations. On HumanEval+ and MBPP+, GrammarCoder achieves substantial accuracy gains: syntax error rates approach zero, and semantic error rates decrease by 12.7%.
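To make the core idea of CFG-guided decoding concrete, here is a minimal, self-contained sketch (not the paper's implementation): a toy expression grammar determines which tokens are legal at each step, and decoding masks out the logits of all other tokens before picking the argmax. The vocabulary, grammar, and `logits_fn` interface below are illustrative assumptions.

```python
# Toy sketch of grammar-constrained decoding (illustrative only; not the
# GrammarCoder implementation). A tiny arithmetic-expression grammar
# restricts which tokens may follow the current prefix, and greedy
# decoding selects the highest-scoring *grammatical* token.

VOCAB = ["x", "y", "+", "*", "(", ")", "<eos>"]

def allowed_next(tokens):
    """Return the set of vocab indices the toy grammar permits next."""
    # After an operand or ')', we expect an operator, a closing ')', or
    # <eos>; at the start, or after an operator or '(', we expect an
    # operand or an opening '('.
    depth = tokens.count("(") - tokens.count(")")
    expect_operand = not tokens or tokens[-1] in ("+", "*", "(")
    if expect_operand:
        allowed = {"x", "y", "("}
    else:
        allowed = {"+", "*"}
        allowed.add(")" if depth > 0 else "<eos>")
    return {VOCAB.index(t) for t in allowed}

def constrained_greedy_decode(logits_fn, max_len=10):
    """Greedy decoding where ungrammatical tokens are masked out.

    `logits_fn(tokens)` stands in for a language model: it maps the
    current prefix to one score per vocabulary entry.
    """
    tokens = []
    for _ in range(max_len):
        logits = logits_fn(tokens)
        legal = allowed_next(tokens)
        best = max(legal, key=lambda i: logits[i])  # argmax over legal set
        tok = VOCAB[best]
        if tok == "<eos>":
            break
        tokens.append(tok)
    return tokens
```

For example, a model whose raw scores favor the ungrammatical token `)` at the start is still forced to emit a well-formed expression, since `)` is never in the legal set until a matching `(` has been produced. Real systems apply the same masking idea over a full programming-language grammar, typically via an incremental parser tracking the derivation state.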
📝 Abstract
Grammar is a cornerstone of programming languages and software engineering, defining the syntactic space and structure of programs. Prior work has shown that grammar-based code representations are effective in small-scale models, reducing syntax errors and improving performance. However, as language models scale to billions of parameters and beyond, syntax-level errors become rare, making it unclear whether grammar information still provides performance benefits. To explore this, we develop a series of billion-scale GrammarCoder models that incorporate grammar rules into the code generation process. Experiments on HumanEval(+) and MBPP(+) demonstrate a notable improvement in code generation accuracy. Further analysis shows that grammar-based representations enhance LLMs' ability to discern subtle code differences, reducing semantic errors caused by minor variations. These findings suggest that grammar-based code representations remain valuable even at the billion-parameter scale, not only maintaining syntactic correctness but also improving semantic differentiation.