🤖 AI Summary
Current mobile UI automation agents lack the ability to proactively engage users in a contextually appropriate way. Method: this work formally defines the "agent-initiated interaction" task around two core dimensions: determining when to interact and delineating the scope of the agent's autonomy. We construct AndroidInteraction, the first benchmark dataset dedicated to this problem, and design a dual-input evaluation paradigm, comparing multimodal inputs (screenshots plus OCR text) against text-only inputs, to systematically assess mainstream LLMs on interaction detection and message generation. Results: experiments reveal significant performance gaps in current LLMs (average F1 < 0.4), confirming the task's inherent difficulty. To foster reproducible research, we open-source our annotation guidelines, baseline implementations, and analysis tools, laying groundwork for trustworthy and personalized UI agent development.
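To make the evaluation setup concrete, below is a minimal sketch of how interaction-timing detection could be scored with F1 over annotated episode steps. The `EpisodeStep` fields, the `predict` interface, and the scoring loop are illustrative assumptions, not the actual AndroidInteraction schema or the paper's released tooling.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class EpisodeStep:
    """One annotated step of a UI automation episode (hypothetical schema)."""
    user_request: str               # the original natural-language task
    screen_ocr: str                 # OCR text extracted from the screenshot
    screenshot_path: Optional[str]  # used by multimodal baselines; None for text-only
    needs_interaction: bool         # gold label: should the agent ask the user here?
    gold_message: str               # reference message when interaction is needed, else ""

def interaction_f1(steps: list[EpisodeStep],
                   predict: Callable[[EpisodeStep], bool]) -> float:
    """F1 over the binary 'interaction needed' decision across all steps."""
    tp = fp = fn = 0
    for step in steps:
        pred = predict(step)
        if pred and step.needs_interaction:
            tp += 1
        elif pred and not step.needs_interaction:
            fp += 1  # unnecessary question: over-asking hurts precision
        elif not pred and step.needs_interaction:
            fn += 1  # silent preference violation: under-asking hurts recall
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

An F1 framing captures the task's two failure modes symmetrically: false positives correspond to unnecessary questions where a default action was expected, and false negatives to actions taken without consulting the user.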
📝 Abstract
Phone automation agents aim to autonomously perform a given natural-language user request, such as scheduling appointments or booking a hotel. While much research effort has been devoted to screen understanding and action planning, complex tasks often necessitate user interaction for successful completion. Aligning the agent with the user's expectations is crucial for building trust and enabling personalized experiences. This requires the agent to proactively engage the user when necessary, avoiding actions that violate their preferences while refraining from unnecessary questions where a default action is expected. We argue that such subtle agent-initiated interaction with the user deserves focused research attention. To promote such research, this paper introduces a task formulation for detecting the need for user interaction and generating appropriate messages. We thoroughly define the task, including aspects like interaction timing and the scope of the agent's autonomy. Using this definition, we derived annotation guidelines and created AndroidInteraction, a diverse dataset for the task, leveraging an existing UI automation dataset. We tested several text-based and multimodal baseline models for the task, finding that it is very challenging for current LLMs. We suggest that our task formulation, dataset, baseline models, and analysis will be valuable for future UI automation research, specifically in addressing this crucial yet often overlooked aspect of agent-initiated interaction. This work provides the foundation needed for personalized agents to properly engage the user at the right moments, within the context of phone UI automation.
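As an illustration of how a text-only baseline for this task might be prompted, the sketch below assembles the user request, the action history, and the OCR'd screen text into a single query that asks the model to decide between interacting and proceeding, and to draft the user-facing message when it chooses to interact. The prompt wording, output protocol, and function names are assumptions for illustration, not the paper's released baselines.

```python
def build_interaction_prompt(user_request: str,
                             action_history: list[str],
                             screen_ocr: str) -> str:
    """Assemble a text-only prompt asking whether the agent should pause
    and message the user before acting (illustrative wording)."""
    history = "\n".join(f"- {a}" for a in action_history) or "- (none yet)"
    return (
        "You are a phone UI automation agent.\n"
        f"User request: {user_request}\n"
        f"Actions taken so far:\n{history}\n"
        f"Current screen (OCR text):\n{screen_ocr}\n\n"
        "Before taking the next action, decide whether you must ask the user "
        "something (e.g., a missing detail or a choice that depends on their "
        "preferences). Answer on the first line with INTERACT or PROCEED. "
        "If INTERACT, write the message to the user on the second line."
    )

def parse_decision(response: str) -> tuple[bool, str]:
    """Split a model response into (needs_interaction, message_to_user)."""
    lines = response.strip().splitlines()
    interact = bool(lines) and lines[0].strip().upper().startswith("INTERACT")
    message = lines[1].strip() if interact and len(lines) > 1 else ""
    return interact, message
```

A multimodal baseline would additionally attach the screenshot itself alongside this text; the dual-input design of the benchmark then makes it possible to measure how much the raw pixels contribute beyond OCR text alone.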