🤖 AI Summary
This work addresses the high latency inherent in traditional large language model (LLM)-based code generation, which follows a serial paradigm in which execution cannot begin until generation completes, so each stage leaves the other idle. The paper formalizes, for the first time, a parallel mechanism between code generation and execution, introducing a three-stage pipeline architecture that concurrently handles code generation, executable fragment detection, and execution. Key innovations include AST-based chunking, dynamic batching, gated execution, and early error interruption. Theoretical analysis establishes an upper bound on latency speedup, and extensive experiments across four benchmarks, seven LLMs, and three execution environments demonstrate up to a 99.9% reduction in non-overlapped execution latency and up to a 55% decrease in end-to-end latency.
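To make the pipeline concrete, here is a minimal sketch of the core idea: as tokens stream in, an AST-based chunker detects the longest prefix that forms complete statements and executes it while generation continues. This is an illustrative reconstruction, not the paper's implementation; the stream contents, `executable_prefix` helper, and the simple "hold back the line still being generated" gate are all assumptions standing in for Eager's AST-based chunking and gated execution.

```python
import ast

def executable_prefix(buffer: str):
    """AST-based chunking (simplified sketch): return the longest prefix of
    the streamed code that parses as a complete module, plus the remainder.
    Text after the last newline (the line still being generated) is held
    back -- a crude stand-in for the paper's gated execution; a real system
    needs lookahead to handle e.g. a trailing `else:` clause."""
    head, sep, _tail = buffer.rpartition("\n")
    if not sep:                       # no finished line yet
        return "", buffer
    lines = head.split("\n")
    for i in range(len(lines), 0, -1):
        prefix = "\n".join(lines[:i]) + "\n"
        try:
            ast.parse(prefix)
            return prefix, buffer[len(prefix):]
        except SyntaxError:
            continue
    return "", buffer

# Hypothetical incremental token stream from an LLM.
stream = ["x = ", "2\n", "y = x", " * 21\n", "print(", "x * y)\n"]

env, buf = {}, ""
for tok in stream:                    # generation and execution overlap
    buf += tok
    chunk, buf = executable_prefix(buf)
    if chunk.strip():
        exec(chunk, env)              # run the fragment as soon as it is complete
```

In this toy run, `x = 2` executes while the model is still emitting `y = x * 21`, which is exactly the overlap that removes execution latency from the critical path.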
📝 Abstract
Current LLM-based coding agents follow a serial execution paradigm: the model first generates the complete code, then invokes an interpreter to execute it. This sequential workflow leaves the executor idle during generation and the generator idle during execution, resulting in unnecessary end-to-end latency. We observe that, unlike human developers, LLMs produce code tokens sequentially without revision, making it possible to execute code as it is being generated. We formalize this parallel execution paradigm, modeling it as a three-stage pipeline of generation, detection, and execution, and derive closed-form latency bounds that characterize its speedup potential and operating regimes. We then present Eager, a concrete implementation featuring AST-based chunking, dynamic batching with gated execution, and early error interruption. We evaluate Eager across four benchmarks, seven LLMs, and three execution environments. Results show that Eager reduces non-overlapped execution latency by up to 99.9% and end-to-end latency by up to 55%.