🤖 AI Summary
This survey addresses core challenges in GUI agent research built on large language models (LLMs) and multimodal LLMs (MLLMs): understanding graphical interfaces, mapping natural-language instructions to executable interaction actions, and executing robustly in real-world scenarios. It presents a unified GUI agent framework, distilled from prior studies, comprising visual encoding, GUI structure parsing, multi-stage prompt engineering, and action planning modules, together with a fine-grained taxonomy that organizes over 100 recent works, mainstream datasets, benchmark suites, and industrial practices into a comprehensive review of the field. The survey identifies critical bottlenecks, namely limited generalization, poor cross-application transferability, and low action reliability, and outlines promising future directions, including lightweight adaptation, embodied interaction modeling, and closed-loop evaluation, with the aim of supporting the practical deployment and maturation of GUI agents.
📝 Abstract
Recent advances in foundation models, particularly Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs), have facilitated the development of intelligent agents capable of performing complex tasks. By leveraging the ability of (M)LLMs to process and interpret Graphical User Interfaces (GUIs), these agents can autonomously execute user instructions, simulating human-like interactions such as clicking and typing. This survey consolidates recent research on (M)LLM-based GUI agents, highlighting key innovations in data resources, frameworks, and applications. We begin by reviewing representative datasets and benchmarks, followed by an overview of a generalized, unified framework that encapsulates the essential components of prior studies, supported by a detailed taxonomy. Additionally, we explore relevant commercial applications. Drawing insights from existing work, we identify key challenges and propose future research directions. We hope this survey will inspire further advancements in the field of (M)LLM-based GUI agents.
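The agent loop described above, in which an (M)LLM perceives a GUI, plans, and executes human-like actions such as clicking and typing, can be sketched as a minimal closed loop. This is an illustrative outline only, not any specific framework from the survey; all names here (`Action`, `parse_action`, `run_episode`, the `planner` and `executor` interfaces) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str       # e.g. "click", "type", "stop"
    target: str     # identifier of the UI element to act on
    text: str = ""  # payload for "type" actions

def parse_action(reply: str) -> Action:
    """Parse a planner reply such as 'click login_button' or 'type search_box hello'."""
    parts = reply.split(maxsplit=2)
    kind = parts[0]
    target = parts[1] if len(parts) > 1 else ""
    text = parts[2] if len(parts) > 2 else ""
    return Action(kind, target, text)

def run_episode(instruction, planner, executor, max_steps=10):
    """Closed loop: observe the GUI, prompt the (M)LLM planner, parse and
    execute the resulting action, until the planner emits a 'stop' action."""
    history = []
    observation = executor.observe()  # screenshot and/or accessibility tree
    for _ in range(max_steps):
        prompt = f"Task: {instruction}\nScreen: {observation}\nNext action:"
        action = parse_action(planner(prompt))
        history.append(action)
        if action.kind == "stop":
            break
        observation = executor.apply(action)  # act, then re-observe
    return history
```

In practice the `planner` call wraps an (M)LLM prompted with the visual encoding or parsed GUI structure, and `executor` wraps a real device or browser driver; the loop structure itself (observe, plan, act, re-observe) is the common skeleton the survey's unified framework abstracts over.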