AI-Generated Code Is Not Reproducible (Yet): An Empirical Study of Dependency Gaps in LLM-Based Coding Agents

📅 2025-12-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the executability of code generated by large language models (LLMs) in clean, minimal environments, revealing a substantial gap between declared dependencies and actual runtime requirements. We propose a novel three-layer dependency framework—comprising declared, available, and runtime dependencies—to systematically quantify dependency inconsistencies in LLM-based programming agents and assess cross-language reproducibility. Using standardized prompt sets across Python, JavaScript, and Java, we conduct automated dependency parsing and environment validation on Claude Code, Codex, and Gemini. Results show that only 68.3% of generated projects execute out-of-the-box; execution success rates are 89.2% for Python and 44.0% for Java, highlighting language-specific disparities. On average, dependency graphs inflate 13.5× relative to declared dependencies, exposing pervasive implicit dependency issues. This work provides critical empirical evidence and a methodological foundation for improving the reliability and engineering deployability of LLM-generated code.

📝 Abstract
The rise of Large Language Models (LLMs) as coding agents promises to accelerate software development, but the reproducibility of the code they generate remains largely unexplored. This paper presents an empirical study of whether LLM-generated code executes successfully in a clean environment containing only OS packages, using only the dependencies the model itself specifies. We evaluate three state-of-the-art LLM coding agents (Claude Code, OpenAI Codex, and Gemini) across 300 projects generated from 100 standardized prompts in Python, JavaScript, and Java. We introduce a three-layer dependency framework (distinguishing claimed, working, and runtime dependencies) to quantify execution reproducibility. Our results show that only 68.3% of projects execute out-of-the-box, with substantial variation across languages (Python 89.2%, Java 44.0%). We also find a 13.5× average expansion from declared to actual runtime dependencies, revealing significant hidden dependencies.
Problem

Research questions and friction points this paper is trying to address.

Investigates reproducibility of LLM-generated code in clean environments
Quantifies dependency gaps using a three-layer framework across languages
Reveals hidden dependencies causing execution failures in generated projects
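A clean-environment check of the kind the study describes ("only OS packages, only the dependencies the model specifies") could be sketched as follows. This is an illustrative sketch, not the paper's harness: the `main.py` entry point, the `requirements.txt` manifest location, and the POSIX virtual-environment layout are all assumptions.

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def runs_out_of_the_box(project_dir: str, timeout: int = 60) -> bool:
    """Install only the declared dependencies into a fresh virtual
    environment, then try to execute the project's entry point."""
    project = Path(project_dir)
    with tempfile.TemporaryDirectory() as tmp:
        venv = Path(tmp) / "venv"
        subprocess.run([sys.executable, "-m", "venv", str(venv)], check=True)
        pip = venv / "bin" / "pip"          # POSIX layout assumed
        python = venv / "bin" / "python"
        req = project / "requirements.txt"  # assumed manifest location
        if req.exists():
            installed = subprocess.run(
                [str(pip), "install", "-r", str(req)], capture_output=True)
            if installed.returncode != 0:
                return False  # declared dependencies do not even install
        result = subprocess.run(
            [str(python), str(project / "main.py")],
            capture_output=True, timeout=timeout)
        return result.returncode == 0
```

Repeating such a check per generated project and dividing successes by the total yields the kind of out-of-the-box execution rate the paper reports (68.3% overall).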
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces three-layer dependency framework for reproducibility
Evaluates three LLM coding agents across 300 projects
Quantifies hidden dependencies and execution success rates
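The gap between the declared layer and the runtime layer could be measured along these lines. The parsing here is a deliberately simplified sketch (real requirement specifiers, extras, and package-vs-import-name mismatches such as `PyYAML`/`yaml` need more care):

```python
def declared_deps(requirements_text: str) -> set[str]:
    """Parse package names from a requirements.txt-style string
    (simplified: ignores extras, markers, and URL requirements)."""
    names = set()
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        for sep in ("==", ">=", "<=", "~=", ">", "<"):
            if sep in line:
                line = line.split(sep)[0]
                break
        names.add(line.strip().lower())
    return names

def runtime_deps(before: set[str], after: set[str]) -> set[str]:
    """Top-level modules newly loaded while the project code ran,
    given snapshots of sys.modules keys taken before and after."""
    return {name.split(".")[0] for name in after - before}

# Illustration with fixed snapshots: one declared package, two loaded at runtime
declared = declared_deps("requests>=2.31\n")
before = {"sys", "builtins"}
after = before | {"json", "json.decoder", "hashlib"}
hidden = runtime_deps(before, after) - declared
print(sorted(hidden))  # ['hashlib', 'json'] — loaded but never declared
```

Comparing the size of the runtime set against the declared set per project is one way to arrive at an expansion factor like the paper's reported 13.5× average.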
Bhanu Prakash Vangala
Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
Ali Adibifar
Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
Tanu Malik
Associate Professor, University of Missouri, Columbia
Data Management Systems, Data Provenance, HPC Systems
Ashish Gehani
SRI
Provenance, Debloating, Security