🤖 AI Summary
This work addresses the limitation of existing GUI agents, which primarily focus on action automation and struggle to understand user intent or provide collaborative assistance in open-ended tasks. To bridge this gap, we introduce GUIDE, the first benchmark for evaluating user intent understanding and collaborative assistance in open-ended GUI scenarios. Built upon 67.5 hours of multimodal screen recordings with think-aloud protocols across ten diverse software applications, GUIDE defines three core tasks: behavioral state detection, intent prediction, and help prediction, leveraging visual frames, interaction sequences, and spoken language. Experiments on eight state-of-the-art multimodal models reveal that current approaches achieve only 44.6% and 55.0% accuracy on behavioral state detection and help prediction, respectively. Notably, incorporating user context improves help prediction performance by up to 50.2 percentage points, underscoring the critical importance of user-centered, structured understanding in intelligent GUI assistance.
📝 Abstract
Graphical User Interface (GUI) agents have the potential to assist users in interacting with complex software (e.g., PowerPoint, Photoshop). While prior research has primarily focused on automating user actions through clicks and keystrokes, this paradigm overlooks human intention: users value the ability to explore, iterate, and refine their ideas while maintaining agency. To move beyond automation and toward collaboration, GUI agents must understand what users are doing and why. We introduce GUIDE (GUI User Intent Detection Evaluation), a benchmark that evaluates AI models on their ability to perceive user behavior, infer intent, and provide assistance in open-ended GUI tasks. GUIDE consists of 67.5 hours of screen recordings from 120 novice user demonstrations with think-aloud narrations, across 10 software applications. GUIDE defines three tasks: (i) Behavior State Detection, (ii) Intent Prediction, and (iii) Help Prediction, which test a model's ability to recognize behavior states, reason about goals, and decide when and how to help. Evaluations of eight state-of-the-art multimodal models reveal that all models struggle, achieving only 44.6% and 55.0% accuracy on behavior state detection and help prediction, respectively. However, providing user context significantly improves performance, raising help prediction accuracy by up to 50.2 percentage points, highlighting the critical role of structured user understanding in effective assistance. Our dataset is available at https://guide-bench.github.io.
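The three tasks and the accuracy metric described above can be pictured with a minimal sketch. Everything here is an illustrative assumption — the field names, label sets, and `GuideExample`/`accuracy` helpers are hypothetical and do not reflect the benchmark's actual data format or evaluation code.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical schema for one GUIDE-style example. Field names and
# label values are assumptions for illustration, not the real format.
@dataclass
class GuideExample:
    frames: List[str]        # paths to sampled screen-recording frames
    interactions: List[str]  # logged actions, e.g. "click(Save)"
    narration: str           # transcribed think-aloud speech
    behavior_state: str      # task (i) label, e.g. "exploring" or "stuck"
    intent: str              # task (ii): free-text description of the goal
    needs_help: bool         # task (iii): should the agent offer help now?

def accuracy(preds: List[str], golds: List[str]) -> float:
    """Exact-match accuracy, as reported for behavior state and help prediction."""
    assert len(preds) == len(golds) and golds
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

# Toy usage: two predictions, one correct -> 0.5 accuracy.
example_accuracy = accuracy(["exploring", "stuck"], ["exploring", "verifying"])
```

A real evaluation would feed the frames, interaction log, and narration to a multimodal model and compare its predicted labels against the annotations; the sketch only fixes the shape of that comparison.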