🤖 AI Summary
Repetition in code generation by large language models (LLMs) occurs across multiple granularities (character, statement, and block level) and remains a pervasive, under-studied issue. Method: This work presents the first systematic empirical study of multi-granularity repetition across 19 state-of-the-art code LLMs and introduces DeRep, a lightweight, interpretable, production-ready rule-based deduplication method. DeRep integrates syntactic structure analysis, semantic similarity assessment, sliding-window pattern detection, and multi-level redundancy filtering. Contribution/Results: DeRep reduces repetition rates by over 90% on the rep-3, rep-line, and sim-line metrics, while improving Pass@1 by 208.3%. When integrated with existing deduplication techniques, it further boosts Pass@1 by 53.7%–215.7%, significantly enhancing both the conciseness and functional correctness of generated code. The study also establishes the first taxonomy covering 20 distinct repetition patterns.
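To make the "sliding-window pattern detection" idea concrete, here is a minimal sketch of how a window-based check for block-level repetition might work. This is an illustrative reconstruction, not DeRep's actual implementation: the function name, window bounds, and return format are all assumptions.

```python
def find_repeated_blocks(lines, max_window=4, min_repeats=2):
    """Illustrative sliding-window check (NOT DeRep's real code):
    slide windows of 1..max_window lines over the code and flag runs
    where the same block of lines repeats back-to-back.

    Returns a list of (start_index, window_size, repeat_count) tuples.
    """
    hits = []
    n = len(lines)
    for w in range(1, max_window + 1):
        i = 0
        while i + 2 * w <= n:
            # Count how many times the block at i repeats consecutively.
            repeats = 1
            while (i + (repeats + 1) * w <= n
                   and lines[i:i + w] == lines[i + repeats * w:
                                               i + (repeats + 1) * w]):
                repeats += 1
            if repeats >= min_repeats:
                hits.append((i, w, repeats))
                i += repeats * w  # skip past the whole repeated run
            else:
                i += 1
    return hits


code = ["x = 1", "print(x)", "print(x)", "print(x)", "y = 2"]
# The single-line block "print(x)" repeats 3 times starting at index 1.
print(find_repeated_blocks(code))
```

A real detector would additionally normalize whitespace and tolerate near-duplicates (the "semantic similarity assessment" component), but the core scan is this kind of window comparison.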
📝 Abstract
Despite recent advances in Large Language Models (LLMs) for code generation, the quality of LLM-generated code still faces significant challenges. One prominent issue is code repetition, which refers to the model's tendency to generate structurally redundant code, resulting in inefficiencies and reduced readability. To address this, we conduct the first empirical study to investigate the prevalence and nature of repetition across 19 state-of-the-art code LLMs using three widely-used benchmarks. Our study includes both quantitative and qualitative analyses, revealing that repetition is pervasive and manifests at various granularities and extents, including the character, statement, and block levels. We further summarize a taxonomy of 20 repetition patterns. Building on our findings, we propose DeRep, a rule-based technique designed to detect and mitigate repetition in generated code. We evaluate DeRep on both open-source benchmarks and in an industrial setting. Our results demonstrate that DeRep significantly outperforms baselines in reducing repetition (with average improvements of 91.3%, 93.5%, and 79.9% in the rep-3, rep-line, and sim-line metrics, respectively) and enhancing code quality (with a Pass@1 increase of 208.3% over greedy search). Furthermore, integrating DeRep improves the performance of existing repetition mitigation methods, with Pass@1 improvements ranging from 53.7% to 215.7%.
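For readers unfamiliar with the repetition metrics named above, a common formulation of rep-n from the text-degeneration literature measures the fraction of n-grams in an output that duplicate an earlier n-gram. The paper's exact definitions of rep-3, rep-line, and sim-line may differ; the sketch below is only an assumed, standard version:

```python
def rep_n(tokens, n=3):
    """Assumed rep-n formulation (the paper's exact definition may
    differ): fraction of n-grams that are duplicates of another
    n-gram in the same sequence. 0.0 = no repetition."""
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not grams:
        return 0.0
    return 1.0 - len(set(grams)) / len(grams)


def rep_line(lines):
    """Assumed line-level analogue: fraction of lines that duplicate
    another (whitespace-stripped) line in the same snippet."""
    stripped = [ln.strip() for ln in lines if ln.strip()]
    if not stripped:
        return 0.0
    return 1.0 - len(set(stripped)) / len(stripped)
```

Under this formulation, a fully unique sequence scores 0.0 and heavy degeneration pushes the score toward 1.0, so "reducing repetition by 91.3% in rep-3" means the metric's value dropped by that relative amount.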