🤖 AI Summary
This study addresses the diverse failure modes of large language models (LLMs) in code generation and the lack of guidance on which enhancement strategy to apply to each failure type. Through an empirical analysis of LLM-generated code failures across 25 GitHub projects, the work evaluates self-critique, multi-model collaboration, and retrieval-augmented generation (RAG) against four representative failure categories, and introduces a decision framework that matches repair strategies to failure characteristics. Experimental results show that RAG achieves the highest completion across all failure types, while self-critique is effective only on code-reviewable logic errors and fails entirely on external service integration. The proposed framework gives practitioners a data-driven basis for strategy selection in place of trial-and-error.
📝 Abstract
Large language models (LLMs) show promise for automating software development by translating requirements into code. However, even advanced prompting workflows like progressive prompting often leave some requirements unmet. Although methods such as self-critique, multi-model collaboration, and retrieval-augmented generation (RAG) have been proposed to address these gaps, developers lack clear guidance on when to use each. In an empirical study of 25 GitHub projects, we found that progressive prompting achieves 96.9% average task completion, significantly outperforming direct prompting (80.5%, Cohen's d=1.63, p<0.001) but still leaving 8 projects incomplete. For the 6 most representative projects, we evaluated each enhancement strategy across 4 failure types. Our results reveal that method effectiveness depends critically on failure characteristics: self-critique succeeds on code-reviewable logic errors but fails completely on external service integration (0% improvement), while RAG achieves the highest completion across all failure types with superior efficiency. Based on these findings, we propose a decision framework that maps each failure pattern to the most suitable enhancement method, giving practitioners practical, data-driven guidance instead of trial-and-error.
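At its core, the proposed decision framework is a mapping from observed failure characteristics to the enhancement method the study found most effective. A minimal sketch of that idea is below; the failure-type names and function are illustrative placeholders, not the paper's exact taxonomy or implementation. The only mappings grounded in the abstract are that self-critique works on code-reviewable logic errors, and that RAG performed best overall (so it serves as the fallback).

```python
# Hypothetical sketch of a failure-type -> strategy lookup, in the
# spirit of the paper's decision framework. Names are illustrative.
FAILURE_TO_STRATEGY = {
    "code_reviewable_logic_error": "self_critique",  # self-critique succeeds here
    "external_service_integration": "rag",           # self-critique showed 0% improvement
}

def select_strategy(failure_type: str) -> str:
    """Return the recommended enhancement method for a failure type,
    defaulting to RAG, which achieved the highest completion across
    all failure types in the study."""
    return FAILURE_TO_STRATEGY.get(failure_type, "rag")
```

A practitioner would first classify a failed generation into one of the failure types, then dispatch to the corresponding repair method rather than trying each strategy blindly.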