🤖 AI Summary
Current code-generating agents in test-driven development lack a theoretical account of their environment-interaction strategies. This work proposes a probabilistic framework that formalizes the two dominant paradigms, code selection and feedback-based generation, and, for the first time, models fuzzy functional-similarity selection as an estimator with an inductive bias. It further interprets backprompting as a contextual approximation of Thompson sampling, enabling the derivation of an irreducible regret bound. The theoretical claims are validated empirically on BigCodeBenchHard, LeetCodeDataset, and QiskitHumanEvalSim. The work also introduces QiskitHumanEvalSimX, an enhanced benchmark with refined task descriptions that better evaluate agent capabilities.
📝 Abstract
Coding agents are increasingly used in test-driven software development, yet the theoretical mechanisms behind their environment-interaction strategies remain underexplored. We provide a probabilistic framework for the two dominant paradigms: code selection after generation using the execution environment, and code generation conditioned on environment feedback. First, we formalize several well-established selection heuristics as environment-aware estimators of code correctness. We prove that estimators based on fuzzy functional similarity introduce an inductive bias and strictly dominate estimators based on functional equivalence in terms of signal-to-noise ratio. Second, we frame backprompting as an in-context approximation of Thompson sampling. We derive a novel regret bound for reward functions with unobservable components, explaining theoretically why the effectiveness of backprompting is limited by the ambiguity of the informal task description (an irreducible regret). Using three state-of-the-art open-weight models, we corroborate these findings across BigCodeBenchHard, LeetCodeDataset, and QiskitHumanEvalSim. Our formalization also suggests how to improve task descriptions effectively, leading to a new benchmark, QiskitHumanEvalSimX.
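As a rough intuition for the Thompson-sampling view of backprompting, the sketch below shows classic Thompson sampling over a fixed pool of candidate programs with Beta posteriors updated by noisy test outcomes. This is purely illustrative and not the paper's algorithm: `candidates`, `run_tests`, and all parameters are hypothetical, and the paper's backprompting setting conditions generation on feedback in-context rather than selecting from a fixed pool.

```python
import random

def thompson_select(candidates, run_tests, rounds=50, seed=0):
    """Illustrative Thompson sampling over candidate programs.

    candidates: list of candidate code strings (hypothetical)
    run_tests:  callable returning True/False for one (possibly noisy)
                test execution of a candidate
    Returns the index of the candidate with the highest posterior mean.
    """
    rng = random.Random(seed)
    # Beta(1, 1) prior on each candidate's probability of passing tests
    alpha = [1] * len(candidates)
    beta = [1] * len(candidates)
    for _ in range(rounds):
        # Sample a plausible pass rate per candidate, pick the argmax
        samples = [rng.betavariate(alpha[i], beta[i])
                   for i in range(len(candidates))]
        i = max(range(len(candidates)), key=lambda k: samples[k])
        # Update the chosen candidate's posterior with the test outcome
        if run_tests(candidates[i]):
            alpha[i] += 1
        else:
            beta[i] += 1
    return max(range(len(candidates)),
               key=lambda i: alpha[i] / (alpha[i] + beta[i]))
```

In this analogy, the informal task description corresponds to an unobservable component of the reward: tests only probe the behaviors the description pins down, so ambiguity that the tests cannot resolve persists no matter how many feedback rounds are run, which is the intuition behind the irreducible regret term.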