🤖 AI Summary
This survey systematically reviews advances in GUI agents powered by LLMs and multimodal LLMs (MLLMs), focusing on four core capabilities—perception, exploration, planning, and interaction—in real-world interface environments. Key challenges include inaccurate UI element localization, inefficient knowledge retrieval, weak long-horizon task planning, and unsafe action execution. To address these, we propose the first four-layer decoupled architecture, elucidating the synergy between multimodal understanding and long-range reasoning. We identify fundamental deficiencies in prevailing benchmarks—namely, insufficient scenario coverage, low task authenticity, and the absence of safety evaluation—and introduce a standardized benchmarking methodology. Furthermore, we construct the first cross-platform technical landscape spanning desktop, mobile, and web interfaces, integrating key techniques: ViT/CLIP-based visual encoding, LLM-driven instruction parsing, knowledge-graph augmentation, hybrid chain-of-thought and tree-search planning, and safety-constrained action generation.
📝 Abstract
Graphical User Interface (GUI) Agents have emerged as a transformative paradigm in human-computer interaction, evolving from rule-based automation scripts to sophisticated AI-driven systems capable of understanding and executing complex interface operations. This survey provides a comprehensive examination of the rapidly advancing field of LLM-based GUI Agents, systematically analyzing their architectural foundations, technical components, and evaluation methodologies. We identify and analyze four fundamental components that constitute modern GUI Agents: (1) perception systems that integrate text-based parsing with multimodal understanding for comprehensive interface comprehension; (2) exploration mechanisms that construct and maintain knowledge bases through internal modeling, historical experience, and external information retrieval; (3) planning frameworks that leverage advanced reasoning methodologies for task decomposition and execution; and (4) interaction systems that manage action generation with robust safety controls. Through rigorous analysis of these components, we reveal how recent advances in large language models and multimodal learning have revolutionized GUI automation across desktop, mobile, and web platforms. We critically examine current evaluation frameworks, highlighting methodological limitations in existing benchmarks while proposing directions for standardization. This survey also identifies key technical challenges, including accurate element localization, effective knowledge retrieval, long-horizon planning, and safety-aware execution control, while outlining promising research directions for enhancing GUI Agents' capabilities. Our systematic review provides researchers and practitioners with a thorough understanding of the field's current state and offers insights into future developments in intelligent interface automation.
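The four components enumerated in the abstract (perception, exploration, planning, interaction) compose into a single agent loop. The sketch below illustrates that decomposition in minimal Python; all class and method names are hypothetical stand-ins invented for illustration, not drawn from any system surveyed, and the planning and safety logic are deliberately trivial placeholders for the chain-of-thought/tree-search planners and safety controls the survey discusses.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    screenshot: bytes      # raw pixels for multimodal (ViT/CLIP-style) encoding
    ui_tree: dict          # text-based parse (accessibility/DOM tree)

@dataclass
class Agent:
    # Exploration component: accumulated experience acting as a toy knowledge base.
    knowledge: list = field(default_factory=list)

    def perceive(self, obs: Observation) -> dict:
        # Perception: fuse the text-based parse with (stubbed) visual understanding.
        return {"elements": obs.ui_tree.get("elements", [])}

    def plan(self, goal: str, state: dict) -> list:
        # Planning: decompose the goal into candidate actions; a real agent
        # would use LLM reasoning here rather than substring matching.
        return [{"action": "click", "target": e["id"]}
                for e in state["elements"]
                if goal.lower() in e.get("label", "").lower()]

    def act(self, step: dict) -> bool:
        # Interaction with a safety gate: refuse destructive actions.
        if step["target"] in {"delete_all", "format_disk"}:
            return False
        self.knowledge.append(step)  # store experience for later retrieval
        return True

obs = Observation(b"", {"elements": [{"id": "btn_save", "label": "Save file"}]})
agent = Agent()
state = agent.perceive(obs)
steps = agent.plan("save", state)
ok = all(agent.act(s) for s in steps)
```

The value of the decoupling is visible even at this scale: each method can be swapped independently (e.g., replacing `plan` with a tree-search planner) without touching the other three components.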