AI Summary
This work addresses the limitations of current mobile multimodal large language models, which are largely confined to passive responses: they lack both proactive perception of users' implicit intentions and the ability to generate executable actions, and the field lacks a systematic evaluation benchmark. We propose the first comprehensive benchmark for proactive intelligence in mobile scenarios, which formalizes the task as inferring user intent from four dimensions of contextual signals and generating executable function sequences, spanning 14 real-world categories, 63 APIs, and over 3,660 instances. We also introduce an evaluation framework supporting multi-reference annotations and expert review, alongside a fine-tuned model based on Qwen2.5-VL-7B-Instruct. Experimental results show that our approach achieves a success rate of 19.15%, significantly outperforming o1 (15.71%) and GPT-5 (7.39%), validating both the learnability of proactive intelligence and the effectiveness of the proposed benchmark.
Abstract
Multimodal large language models (MLLMs) have made significant progress in mobile agent development, yet their capabilities remain predominantly confined to a reactive paradigm in which they merely execute explicit user commands. The emerging paradigm of proactive intelligence, where agents autonomously anticipate needs and initiate actions, represents the next frontier for mobile agents. However, its development is critically bottlenecked by the lack of benchmarks that capture real-world complexity and enable objective, executable evaluation. To overcome these challenges, we introduce ProactiveMobile, a comprehensive benchmark designed to systematically advance research in this domain. ProactiveMobile formalizes the proactive task as inferring latent user intent from four dimensions of on-device contextual signals and generating an executable function sequence from a comprehensive function pool of 63 APIs. The benchmark comprises over 3,660 instances across 14 scenarios and embraces real-world complexity through multi-answer annotations. To ensure quality, a team of 30 experts conducted a final audit of the benchmark, verifying factual accuracy, logical consistency, and action feasibility, and correcting any non-compliant entries. Extensive experiments demonstrate that our fine-tuned Qwen2.5-VL-7B-Instruct achieves a success rate of 19.15%, outperforming o1 (15.71%) and GPT-5 (7.39%). This result indicates that proactivity is a critical competency widely lacking in current MLLMs, yet it is learnable, underscoring the value of the proposed benchmark for proactivity evaluation.
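To make the multi-answer evaluation concrete, the following is a minimal sketch of how a success rate over multi-reference annotations could be computed: a predicted function sequence counts as a success if it matches any of the annotated reference sequences for that instance. All function names, the data layout, and the exact-match rule are illustrative assumptions, not the benchmark's actual protocol.

```python
# Hypothetical sketch of multi-reference success evaluation.
# A prediction is a list of (api_name, args) calls; each instance is
# annotated with several acceptable reference sequences (multi-answer).

def is_success(predicted, references):
    """True if the predicted call sequence exactly matches any reference."""
    return any(predicted == ref for ref in references)

def success_rate(predictions, annotations):
    """Fraction of instances whose prediction matches some reference."""
    hits = sum(is_success(p, refs) for p, refs in zip(predictions, annotations))
    return hits / len(predictions)

# Illustrative example with two instances (API names are made up):
preds = [
    [("set_alarm", {"time": "07:00"})],
    [("send_message", {"to": "Alice"})],
]
refs = [
    # Instance 1: two acceptable answers; the first matches the prediction.
    [[("set_alarm", {"time": "07:00"})], [("set_alarm", {"time": "7:00"})]],
    # Instance 2: the only acceptable answer differs from the prediction.
    [[("start_call", {"to": "Alice"})]],
]
print(success_rate(preds, refs))  # 0.5
```

Exact matching is the strictest possible choice; a real evaluator would likely relax it (e.g. order-insensitive or argument-normalized comparison), which the expert-review stage described above would help calibrate.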