Agent-Initiated Interaction in Phone UI Automation

📅 2025-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Current mobile UI automation agents lack proactive and contextually appropriate user-interaction capabilities. Method: This work formally defines the "agent-initiated interaction" task framework, introducing two core dimensions: interaction timing determination and autonomous boundary delineation. We construct AndroidInteraction, the first benchmark dataset dedicated to this problem, and design a dual-input evaluation paradigm combining multimodal inputs (screenshots + OCR) and pure text to systematically assess mainstream LLMs on interaction detection and message generation. Results: Experiments reveal significant performance gaps in current LLMs (average F1 < 0.4), confirming the task's inherent difficulty. To foster reproducible research, we open-source our annotation guidelines, baseline implementations, and analysis tools, laying foundational groundwork for trustworthy and personalized UI agent development.
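To make the reported metric concrete, here is a minimal sketch (not the paper's code; the function name and toy labels are illustrative) of how binary interaction-need detection could be scored with the F1 measure the summary cites:

```python
# Illustrative sketch: scoring per-step "should the agent ask the user?"
# predictions with binary F1, the metric family the summary reports
# (average F1 < 0.4 for current LLMs). Labels: 1 = interaction needed.

def f1_score(gold, pred):
    """Binary F1 over aligned per-step gold and predicted labels."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy episode: gold requires interaction at steps 1 and 3; the model
# gets one hit, one false alarm, and one miss.
gold = [0, 1, 0, 1, 0]
pred = [0, 1, 1, 0, 0]
print(round(f1_score(gold, pred), 2))  # → 0.5
```

Since interaction-needed steps are rare relative to ordinary action steps, F1 rather than accuracy keeps a model from scoring well by never asking the user.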

📝 Abstract
Phone automation agents aim to autonomously perform a given natural-language user request, such as scheduling appointments or booking a hotel. While much research effort has been devoted to screen understanding and action planning, complex tasks often necessitate user interaction for successful completion. Aligning the agent with the user's expectations is crucial for building trust and enabling personalized experiences. This requires the agent to proactively engage the user when necessary, avoiding actions that violate their preferences while refraining from unnecessary questions where a default action is expected. We argue that such subtle agent-initiated interaction with the user deserves focused research attention. To promote such research, this paper introduces a task formulation for detecting the need for user interaction and generating appropriate messages. We thoroughly define the task, including aspects like interaction timing and the scope of the agent's autonomy. Using this definition, we derived annotation guidelines and created AndroidInteraction, a diverse dataset for the task, leveraging an existing UI automation dataset. We tested several text-based and multimodal baseline models for the task, finding that it is very challenging for current LLMs. We suggest that our task formulation, dataset, baseline models and analysis will be valuable for future UI automation research, specifically in addressing this crucial yet often overlooked aspect of agent-initiated interaction. This work provides a needed foundation to allow personalized agents to properly engage the user when needed, within the context of phone UI automation.
Problem

Research questions and friction points this paper is trying to address.

Detecting when phone automation agents need user interaction
Generating appropriate messages for agent-initiated user engagement
Balancing agent autonomy with user preferences in UI automation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proactive user interaction detection in UI automation
Multimodal models for agent-user message generation
AndroidInteraction dataset for training automation agents
Noam Kahlon
Google Research, Mountain View, CA, USA
Guy Rom
University of Oxford, Google
Computer Science, Physics
Anatoly Efros
Google Research, Mountain View, CA, USA
Filippo Galgani
Google Research, Mountain View, CA, USA
Omri Berkovitch
Google Research
NLP
Sapir Caduri
Google Research, Mountain View, CA, USA
William E. Bishop
Google Research, Mountain View, CA, USA
Oriana Riva
Google Research
NLP, AI, Mobile systems
Ido Dagan
Professor, Computer Science Department, Bar-Ilan University
Natural Language Processing, Machine Learning