CAAP: Context-Aware Action Planning Prompting to Solve Computer Tasks with Front-End UI Only

📅 2024-06-11
🏛️ arXiv.org
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This work addresses the limited generalizability of robotic process automation (RPA) systems, which typically rely on HTML/DOM parsing or domain-specific APIs for GUI interaction. We propose a purely vision-driven, end-to-end agent framework that operates solely on raw screen screenshots. Our method integrates multimodal OCR with large language models (LLMs) for joint reasoning, and introduces two key components: (1) a GUI action space constraint module that grounds actions in visual affordances, and (2) a context-aware action planning (CAAP) prompting strategy for robust, cross-application, cross-platform keyboard and mouse-level control. Crucially, the approach requires no access to source code, backend APIs, human demonstration data, or environment-coupled interfaces. Evaluated on MiniWoB++ and WebShop benchmarks, it achieves 94.5% task success rate and 62.3 points, respectively—significantly outperforming all image-only baselines. The code and models are publicly released.
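As a rough illustration of the screenshot-only control loop described above, the sketch below stubs out perception, reasoning, and actuation so the flow is visible. All names here are hypothetical illustrations, not taken from the paper's released code; the OCR, LLM, and input-injection steps are placeholders.

```python
# Hypothetical sketch of a screenshot-only agent loop: perceive the screen
# as pixels, convert it to text, let an LLM choose a keyboard/mouse action.
# OCR, the LLM call, and input injection are stubbed with canned behavior.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str         # "click", "type", or "stop"
    target: str = ""  # UI element label taken from the OCR text
    text: str = ""    # keystrokes for "type" actions

def ocr_screenshot(screenshot: bytes) -> str:
    """Stub: convert raw pixels into a textual list of visible UI elements."""
    return "textbox 'Name' at (120, 300); button 'Submit' at (120, 340)"

def ask_llm(task: str, screen_text: str, history: list[Action]) -> Action:
    """Stub: the LLM picks the next low-level action from the task context."""
    if not history:
        return Action("type", target="Name", text="Alice")
    if history[-1].kind == "type":
        return Action("click", target="Submit")
    return Action("stop")

def run_agent(task: str, max_steps: int = 10) -> list[Action]:
    history: list[Action] = []
    for _ in range(max_steps):
        screen_text = ocr_screenshot(b"")             # perceive: screenshot only
        action = ask_llm(task, screen_text, history)  # reason: LLM over text
        if action.kind == "stop":
            break
        history.append(action)                        # act + remember
    return history

actions = run_agent("Enter the name Alice and submit the form")
```

The point of the sketch is the interface boundary: the agent never touches HTML, a DOM, or an application API — only pixels in and keyboard/mouse events out.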

📝 Abstract
Software robots have long been used in Robotic Process Automation (RPA) to automate mundane and repetitive computer tasks. With the advent of Large Language Models (LLMs) and their advanced reasoning capabilities, these agents are now able to handle more complex or previously unseen tasks. However, LLM-based automation techniques in recent literature frequently rely on HTML source code for input or application-specific API calls for actions, limiting their applicability to specific environments. We propose an LLM-based agent that mimics human behavior in solving computer tasks. It perceives its environment solely through screenshot images, which are then converted into text for an LLM to process. By leveraging the reasoning capability of the LLM, we eliminate the need for large-scale human demonstration data typically required for model training. The agent only executes keyboard and mouse operations on Graphical User Interface (GUI), removing the need for pre-provided APIs to function. To further enhance the agent's performance in this setting, we propose a novel prompting strategy called Context-Aware Action Planning (CAAP) prompting, which enables the agent to thoroughly examine the task context from multiple perspectives. Our agent achieves an average success rate of 94.5% on MiniWoB++ and an average task score of 62.3 on WebShop, outperforming all previous studies of agents that rely solely on screen images. This method demonstrates potential for broader applications, particularly for tasks requiring coordination across multiple applications on desktops or smartphones, marking a significant advancement in the field of automation agents. Codes and models are accessible at https://github.com/caap-agent/caap-agent.
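The abstract describes CAAP prompting as having the LLM examine the task context from multiple perspectives before committing to an action. A minimal, hypothetical sketch of what such a prompt builder might look like is below; the section headings and review questions are illustrative assumptions, not the paper's actual prompt.

```python
# Hypothetical CAAP-style prompt builder: assemble the task goal, the
# OCR-derived screen state, the action history, and a set of self-review
# questions before asking for the single next action. All wording here is
# an assumption for illustration, not the paper's exact prompt.
def build_caap_prompt(task: str, screen_text: str, history: list[str]) -> str:
    sections = [
        ("Task", task),
        ("Current screen (from OCR)", screen_text),
        ("Actions taken so far", "\n".join(history) or "(none)"),
        ("Before acting, consider", "\n".join([
            "- What is the overall goal, and what remains to be done?",
            "- Which visible elements are relevant to the next step?",
            "- Did the previous action have its intended effect?",
        ])),
        ("Now output", "the single next keyboard or mouse action."),
    ]
    return "\n\n".join(f"## {title}\n{body}" for title, body in sections)

prompt = build_caap_prompt(
    task="Buy the cheapest red shirt",
    screen_text="search box 'Search'; button 'Go'",
    history=["type 'red shirt' into search box"],
)
```

Structuring the context this way is one plausible reading of "examining the task context from multiple perspectives": the model is forced to restate goal, state, and progress before choosing an action, rather than reacting to the screen alone.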
Problem

Research questions and friction points this paper is trying to address.

Graphical User Interface Understanding
Robotic Process Automation
Large Language Model Integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

CAAP
Context-Aware Action Planning
Robotic Process Automation
Junhee Cho
Samsung SDS
Jihoon Kim
Samsung SDS
Daseul Bae
Samsung SDS
Jinho Choo
Samsung SDS
Youngjune Gwon
Executive Vice President, Samsung SDS
Yeong-Dae Kwon
Samsung SDS