ActionEngine: From Reactive to Programmatic GUI Agents via State Machine Memory

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing GUI agents, which rely on frame-by-frame invocation of vision-language models for reactive interaction, resulting in high computational cost, high latency, and no persistent memory. To overcome these limitations, the authors propose ActionEngine, a training-free dual-agent framework that shifts the paradigm from reactive execution to programmatic planning. In this architecture, a crawler agent constructs and continuously updates a state-machine memory offline, while an executor agent leverages that memory online to synthesize complete, executable Python programs. Combined with a visual-relocalization repair mechanism, the system achieves a 95% success rate on Reddit tasks in WebArena with only one LLM call on average, surpassing the strongest baseline by 29 percentage points while reducing cost by 11.8× and halving end-to-end latency.

📝 Abstract
Existing Graphical User Interface (GUI) agents operate through step-by-step calls to vision-language models (taking a screenshot, reasoning about the next action, executing it, then repeating on the new page), resulting in high costs and latency that scale with the number of reasoning steps, and limited accuracy because they retain no persistent memory of previously visited pages. We propose ActionEngine, a training-free framework that transitions from reactive execution to programmatic planning through a novel two-agent architecture: a Crawling Agent that constructs an updatable state-machine memory of the GUI through offline exploration, and an Execution Agent that leverages this memory to synthesize complete, executable Python programs for online task execution. To ensure robustness against evolving interfaces, execution failures trigger a vision-based re-grounding fallback that repairs the failed action and updates the memory. This design drastically improves both efficiency and accuracy: on Reddit tasks from the WebArena benchmark, our agent achieves 95% task success with a single LLM call on average, compared to 66% for the strongest vision-only baseline, while reducing cost by 11.8x and end-to-end latency by 2x. Together, these components yield scalable and reliable GUI interaction by combining global programmatic planning, crawler-validated action templates, and node-level execution with localized validation and repair.
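The abstract's re-grounding fallback can be sketched as a try/repair loop around each step of the synthesized program: the happy path runs without any model call, and only a failed action pays for one vision-based re-localization, whose fix is written back into the memory. This is a minimal sketch under assumed interfaces; `run_action`, `reground`, and the step dictionaries are hypothetical stand-ins, not the paper's API.

```python
def execute_program(steps, run_action, reground, memory=None):
    """Run a synthesized action sequence with localized repair.

    steps:      list of action dicts, each with an "id" and a "selector"
                (hypothetical schema).
    run_action: callable that performs one GUI action; raises on failure.
    reground:   callable standing in for the vision-based re-grounding
                fallback; returns a repaired copy of the failed step.
    memory:     optional dict updated with repaired steps, mimicking the
                paper's memory-update-on-repair behavior.
    """
    for step in steps:
        try:
            run_action(step)
        except Exception:
            repaired = reground(step)   # one VLM call to re-locate the element
            run_action(repaired)        # retry with the repaired locator
            if memory is not None:
                memory[step["id"]] = repaired  # persist the fix for next time
```

The design point this illustrates is that validation and repair happen at the level of a single node (one failed action), so a stale selector costs one extra model call instead of restarting the whole per-step reasoning loop.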
Problem

Research questions and friction points this paper is trying to address.

GUI agents
persistent memory
reasoning steps
task accuracy
latency
Innovation

Methods, ideas, or system contributions that make the work stand out.

programmatic GUI agents
state machine memory
two-agent architecture
vision-based re-grounding
training-free framework