Correctness isn't Efficiency: Runtime Memory Divergence in LLM-Generated Code

📅 2026-01-03
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the critical yet overlooked issue that code generated by large language models (LLMs), while often passing unit tests, exhibits substantial variability in runtime memory usage and performance, posing potential operational risks. The authors propose the first evaluation framework specifically designed to assess the runtime stability of LLM-generated code. Their approach innovatively introduces the Dynamic Mean Pairwise Distance (DMPD) and Model Instability Score (MIS), transforming memory traces into monotonic peak profiles to quantify stability. This is complemented by a comprehensive analysis integrating software engineering metrics such as Dynamic Time Warping (DTW) alignment and cognitive complexity. Experiments on BigOBench and CodeContests reveal significant runtime divergence among functionally correct solutions, demonstrate that higher sampling temperatures exacerbate instability, and establish a strong correlation between runtime stability and code maintainability.

📝 Abstract
Large language models (LLMs) can generate programs that pass unit tests, but passing tests does not guarantee reliable runtime behavior. We find that different correct solutions to the same task can show very different memory and performance patterns, which can lead to hidden operational risks. We present a framework to measure execution-time memory stability across multiple correct generations. At the solution level, we introduce Dynamic Mean Pairwise Distance (DMPD), which uses Dynamic Time Warping to compare the shapes of memory-usage traces after converting them into Monotonic Peak Profiles (MPPs) to reduce transient noise. Aggregating DMPD across tasks yields a model-level Model Instability Score (MIS). Experiments on BigOBench and CodeContests show substantial runtime divergence among correct solutions. Instability often increases with higher sampling temperature even when pass@1 improves. We also observe correlations between our stability measures and software engineering indicators such as cognitive and cyclomatic complexity, suggesting links between operational behavior and maintainability. Our results support stability-aware selection among passing candidates in CI/CD to reduce operational risk without sacrificing correctness. Artifacts are available.
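To make the abstract's pipeline concrete, here is a minimal sketch of the DMPD computation as described: raw memory traces are first converted into Monotonic Peak Profiles (a running-maximum envelope, one plausible reading of "monotonic peak profile"), then compared pairwise with Dynamic Time Warping, and the mean pairwise distance is taken. The function names and the absolute-difference DTW cost are illustrative assumptions, not the authors' exact implementation.

```python
from itertools import combinations

def monotonic_peak_profile(trace):
    """Running-maximum envelope of a memory trace; suppresses
    transient dips so only peak growth shapes are compared
    (an assumed reading of the paper's MPP transform)."""
    profile, peak = [], float("-inf")
    for v in trace:
        peak = max(peak, v)
        profile.append(peak)
    return profile

def dtw_distance(a, b):
    """Classic O(n*m) Dynamic Time Warping with |x - y| local cost."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def dmpd(traces):
    """Dynamic Mean Pairwise Distance over one task's correct
    solutions: mean DTW distance between all profile pairs."""
    profiles = [monotonic_peak_profile(t) for t in traces]
    pairs = list(combinations(profiles, 2))
    return sum(dtw_distance(a, b) for a, b in pairs) / len(pairs)

def mis(per_task_traces):
    """Model Instability Score: DMPD aggregated (here, averaged)
    across tasks -- the aggregation rule is an assumption."""
    scores = [dmpd(traces) for traces in per_task_traces]
    return sum(scores) / len(scores)
```

Under this sketch, two solutions with identical peak envelopes contribute zero distance even if their instantaneous usage jitters differently, which is exactly the noise-reduction role the abstract attributes to MPPs.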
Problem

Research questions and friction points this paper is trying to address.

runtime memory divergence
code generation
operational risk
memory stability
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Mean Pairwise Distance
Monotonic Peak Profiles
Model Instability Score
Runtime Memory Divergence
Large Language Models
Prateek Rajput
University of Luxembourg, Esch-sur-Alzette, Luxembourg
Yewei Song
Ph.D. Candidate, University of Luxembourg
natural language processing, software engineering
Abdoul Aziz Bonkoungou
University of Luxembourg, Esch-sur-Alzette, Luxembourg
Iyiola E. Olatunji
University of Luxembourg, Esch-sur-Alzette, Luxembourg
A. Kaboré
University of Luxembourg, Esch-sur-Alzette, Luxembourg
Jacques Klein
University of Luxembourg / SnT
Computer Science, Software Engineering, Android Security, Software Security, Model-Driven Engineering
Tegawendé F. Bissyandé
Chief Scientist II / ERC Fellow / TruX @SnT, University of Luxembourg
Software Security, Program Repair, Code Search, Machine Learning, Big Code