Comment Traps: How Defective Commented-out Code Augments Defects in AI-Assisted Code Generation

📅 2025-12-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Commented-out code (CO code) silently contaminates AI programming assistants (e.g., GitHub Copilot, Cursor), misleading code generation and inducing defects in up to 58.17% of outputs. Method: Through controlled-variable experiments, multi-round prompt perturbations, and rigorous human annotation, we systematically investigate whether models merely copy CO code or actively infer and generalize its defective patterns, even when explicitly instructed to "ignore comments." Contribution/Results: We demonstrate that models do not passively replicate CO code but instead internalize and propagate its flawed logic; explicit exclusion directives reduce defect rates by at most 21.84%. This challenges the prevailing assumption of contextual robustness in AI coding assistants. We introduce the first empirical evaluation framework for CO-code contamination, providing both theoretical insights and practical warnings for enhancing the safety and reliability of AI-assisted programming.

📝 Abstract
With the rapid development of large language models for code generation, AI-powered editors such as GitHub Copilot and Cursor are revolutionizing software development practices. At the same time, studies have identified potential defects in the generated code. Previous research has predominantly examined how code context influences the generation of defective code, often overlooking the impact of defects within commented-out code (CO code). How AI coding assistants interpret CO code in prompts affects the code they generate. This study evaluates how two AI coding assistants, GitHub Copilot and Cursor, are influenced by defective CO code. The experimental results show that defective CO code in the context causes AI coding assistants to generate more defective code, with defect rates reaching up to 58.17 percent. Our findings further demonstrate that the tools do not simply copy the defective code from the context. Instead, they actively reason to complete incomplete defect patterns and continue to produce defective code despite distractions such as incorrect indentation or tags. Even with explicit instructions to ignore the defective CO code, the reduction in defects does not exceed 21.84 percent. These findings underscore the need for improved robustness and security measures in AI coding assistants.
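To make the contamination scenario concrete, here is a hypothetical illustration (not taken from the paper) of the kind of prompt context the study describes: a source file containing a commented-out, injection-prone SQL pattern next to the safe parameterized form. An assistant completing neighboring functions could infer and reproduce the flawed pattern from the comment rather than the safe one.

```python
import sqlite3

def find_user(conn, username):
    # Defective commented-out code: string formatting invites SQL injection.
    # Per the paper's findings, an assistant may internalize and generalize
    # this pattern when completing nearby code, even if told to ignore it.
    #
    # query = "SELECT id FROM users WHERE name = '%s'" % username
    # return conn.execute(query).fetchone()

    # Safe form: parameterized query, so user input is never interpreted
    # as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
print(find_user(conn, "alice"))
print(find_user(conn, "x' OR '1'='1"))
```

Running this prints `(1,)` for the legitimate lookup and `None` for the injection attempt; had the commented-out pattern been generated instead, the second call would have matched every row.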
Problem

Research questions and friction points this paper is trying to address.

Investigates how defective commented-out code influences AI-generated code defects
Evaluates AI assistants' reasoning with incomplete defect patterns in prompts
Highlights need for improved robustness in AI coding assistants
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI coding assistants generate defective code from commented-out code
Tools reason to complete incomplete defect patterns despite distractions
Explicit instructions to ignore defective code reduce defects minimally