Dynamic Stability of LLM-Generated Code

📅 2025-11-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing code generation evaluation overemphasizes functional correctness while neglecting algorithmic complexity differences (e.g., O(n²) vs. O(n log n) sorting), thereby obscuring performance costs. Method: We propose a dynamic stability assessment framework with two novel metrics, Static and Dynamic Canonical Trace Divergence (SCTD/DCTD), and their ratio, the Behavioral Expression Factor (BEF), which quantify structural and runtime behavioral disparities via opcode distribution analysis. Contribution/Results: Experiments on BigOBench and CodeContests reveal substantial algorithmic variation among functionally correct outputs from mainstream LLMs; increasing temperature improves pass rates but exacerbates instability, uncovering a "penalty of instability". Our work advances stability-aware code generation and provides both theoretical foundations and empirical evidence for developing next-generation benchmarks.

📝 Abstract
Current evaluations of LLMs for code generation emphasize functional correctness, overlooking the fact that functionally correct solutions can differ significantly in algorithmic complexity. For instance, an $O(n^2)$ versus $O(n \log n)$ sorting algorithm may yield similar output but incur vastly different performance costs in production. This discrepancy reveals a critical limitation in current evaluation methods: they fail to capture the behavioral and performance diversity among correct solutions. To address this, we introduce a principled framework for evaluating the dynamic stability of generated code. We propose two metrics derived from opcode distributions: Static Canonical Trace Divergence (SCTD), which captures algorithmic structure diversity across generated solutions, and Dynamic Canonical Trace Divergence (DCTD), which quantifies runtime behavioral variance. Their ratio, the Behavioral Expression Factor (BEF), serves as a diagnostic signal: it indicates critical runtime instability when BEF $\ll 1$ and functional redundancy when BEF $\gg 1$. Empirical results on BigOBench and CodeContests show that state-of-the-art LLMs exhibit significant algorithmic variance even among functionally correct outputs. Notably, increasing sampling temperature improves pass@1 rates but degrades stability, revealing an unrecognized trade-off: searching for correct solutions in diverse output spaces introduces a "penalty of instability" between correctness and behavioral consistency. Our findings call for stability-aware objectives in code generation and new benchmarks with asymptotic test cases for robust, real-world LLM evaluation.
Problem

Research questions and friction points this paper is trying to address.

Current LLM code evaluations overlook algorithmic complexity differences in correct solutions
Functionally correct code can have vastly different performance costs in production
Existing methods fail to capture behavioral and performance diversity among correct outputs
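As a minimal illustration of the problem above (an example of ours, not taken from the paper): two sorting functions that are both functionally correct, and therefore indistinguishable to a pass/fail test suite, can differ asymptotically in cost.

```python
import random

def bubble_sort(xs):
    # O(n^2): repeatedly bubble the largest remaining element to the end.
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def merge_sort(xs):
    # O(n log n): split, sort halves, merge.
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]

# Identical outputs on any input: a correctness-only benchmark scores them the same,
# even though their running times diverge sharply as n grows.
data = random.sample(range(100_000), 2_000)
assert bubble_sort(data) == merge_sort(data) == sorted(data)
```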
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposed SCTD metric for algorithmic structure diversity
Introduced DCTD metric for runtime behavioral variance
Defined BEF ratio to diagnose runtime instability
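The summary above says these metrics compare opcode distributions, but does not spell out the divergence measure. The sketch below is our own SCTD-style approximation under assumed choices: Python bytecode opcode frequencies via the standard `dis` module, compared with Jensen-Shannon divergence (the function names `opcode_distribution` and `js_divergence` are ours, not the paper's).

```python
import dis
import math
from collections import Counter

def opcode_distribution(fn):
    # Normalized frequency of opcode names in fn's compiled bytecode.
    counts = Counter(ins.opname for ins in dis.get_instructions(fn))
    total = sum(counts.values())
    return {op: c / total for op, c in counts.items()}

def js_divergence(p, q):
    # Jensen-Shannon divergence with base-2 log, so bounded in [0, 1].
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}
    def kl(a):
        return sum(a.get(k, 0.0) * math.log2(a.get(k, 0.0) / m[k])
                   for k in keys if a.get(k, 0.0) > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

def sort_builtin(xs):
    return sorted(xs)

def sort_bubble(xs):
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

# Two functionally equivalent sorts that are structurally far apart:
sctd_like = js_divergence(opcode_distribution(sort_builtin),
                          opcode_distribution(sort_bubble))
print(f"static opcode divergence: {sctd_like:.3f}")  # > 0: different algorithms
```

A DCTD-style analogue would count opcodes along executed traces (e.g., collected with `sys.settrace`) rather than in the static bytecode, and BEF would then be the ratio of the two divergences.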
Prateek Rajput, University of Luxembourg
Abdoul Aziz Bonkoungou, University of Luxembourg
Yewei Song, Ph.D. Candidate, University of Luxembourg (natural language processing, software engineering)
A. Kaboré, University of Luxembourg
Iyiola E. Olatunji, University of Luxembourg
Jacques Klein, University of Luxembourg / SnT (Computer Science, Software Engineering, Android Security, Software Security, Model-Driven Engineering)
Tegawendé Bissyandé, University of Luxembourg