Failure-Aware Enhancements for Large Language Model (LLM) Code Generation: An Empirical Study on Decision Framework

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the persistent and diverse failure modes of large language models (LLMs) in code generation, and the lack of clear guidance on which enhancement strategy suits each failure type. Through an empirical analysis of LLM-generated code failures across 25 GitHub projects, the work systematically evaluates the efficacy of self-critique, multi-model collaboration, and retrieval-augmented generation (RAG) against four representative failure categories. It further introduces, for the first time, a decision framework that matches repair strategies to failure characteristics. Experimental results show that RAG consistently outperforms the other approaches across all failure types, while self-critique is effective only for logic errors amenable to code review and fails entirely on external service integration. The proposed framework improves strategy-selection accuracy, offering a data-driven foundation for practical deployment in place of trial and error.

📝 Abstract
Large language models (LLMs) show promise for automating software development by translating requirements into code. However, even advanced prompting workflows like progressive prompting often leave some requirements unmet. Although methods such as self-critique, multi-model collaboration, and retrieval-augmented generation (RAG) have been proposed to address these gaps, developers lack clear guidance on when to use each. In an empirical study of 25 GitHub projects, we found that progressive prompting achieves 96.9% average task completion, significantly outperforming direct prompting (80.5%, Cohen's d=1.63, p<0.001) but still leaving 8 projects incomplete. For 6 of the most representative projects, we evaluated each enhancement strategy across 4 failure types. Our results reveal that method effectiveness depends critically on failure characteristics: Self-Critique succeeds on code-reviewable logic errors but fails completely on external service integration (0% improvement), while RAG achieves highest completion across all failure types with superior efficiency. Based on these findings, we propose a decision framework that maps each failure pattern to the most suitable enhancement method, giving practitioners practical, data-driven guidance instead of trial-and-error.
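In spirit, the decision framework described in the abstract reduces to a lookup from observed failure pattern to the recommended enhancement method, with RAG as the fallback since the study found it strongest across all failure types. The sketch below is illustrative only: the failure-type names and the exact mapping are assumptions, not the authors' published taxonomy.

```python
# Hypothetical sketch of a failure-aware strategy selector.
# The keys and mapping are illustrative assumptions; the paper's actual
# four failure categories and framework rules may differ.

FAILURE_TO_STRATEGY = {
    # Self-critique succeeds on code-reviewable logic errors (per the abstract).
    "logic_error": "self_critique",
    # Self-critique showed 0% improvement on external service integration,
    # so route these to RAG.
    "external_service_integration": "rag",
    "api_misuse": "rag",
    "incomplete_requirement": "rag",
}


def select_strategy(failure_type: str) -> str:
    """Return the recommended enhancement strategy for a failure type,
    defaulting to RAG, which achieved the highest completion overall."""
    return FAILURE_TO_STRATEGY.get(failure_type, "rag")
```

A practitioner would call `select_strategy("logic_error")` after classifying a failed generation, getting `"self_critique"` back; any unrecognized failure type falls through to `"rag"`.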
Problem

Research questions and friction points this paper is trying to address.

LLM code generation
failure types
enhancement strategies
task completion
decision framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

failure-aware enhancement
decision framework
retrieval-augmented generation
self-critique
empirical study
Jianru Shen
University of Montana, Missoula, MT, USA
Zedong Peng
MIT
Operations Research · Optimization · Mixed-Integer Programming · Process System Engineering
Lucy Owen
University of Montana, Missoula, MT, USA