🤖 AI Summary
Existing UI navigation evaluation methods focus solely on task success or failure, lacking fine-grained, automated assessment of the underlying sub-processes (goal comprehension, knowledge-based planning, visual grounding, and instruction following) and suffering from insufficient dataset and tooling robustness. This paper introduces Sphinx, the first multi-dimensional, automated benchmark for mobile UI navigation. Sphinx integrates invariant-based validation, knowledge probing, vision-language alignment evaluation, instruction-following quantification, and multimodal behavioral analysis, and its evaluation suite enables fully automated, reproducible, cross-application testing. Experiments on eight large language and multimodal models under 13 configurations reveal that no model achieves end-to-end navigation success. Sphinx systematically exposes structural deficiencies across all core sub-capabilities, yielding granular, actionable insights into current model limitations.
📝 Abstract
Navigating mobile User Interface (UI) applications with large language and vision models driven by high-level goal instructions is an emerging research field with significant practical implications, such as digital assistants and automated UI testing. Benchmarks are required, and widely used in the literature, to evaluate how effective existing models are at mobile UI navigation. Although multiple benchmarks have recently been established for evaluating functional correctness, judged as a binary pass or fail, they do not address the need for multi-dimensional evaluation of the entire UI navigation process. Furthermore, other existing related datasets lack an automated and robust benchmarking suite, making the evaluation process labor-intensive and error-prone. To address these issues, we propose a new benchmark named Sphinx for multi-dimensional evaluation of existing models in practical UI navigation. Sphinx provides a fully automated benchmarking suite that enables reproducibility across real-world mobile apps and employs reliable evaluators to assess model progress. Beyond functional correctness, Sphinx includes comprehensive toolkits for multi-dimensional evaluation, such as invariant-based verification, knowledge probing, and knowledge-augmented generation, to assess model capabilities including goal understanding, knowledge and planning, grounding, and instruction following, ensuring a thorough assessment of each sub-process in mobile UI navigation. We benchmark 8 large language and multimodal models with 13 different configurations on Sphinx. Evaluation results show that all of these models struggle on Sphinx and fail on all test generation tasks. Our further analysis of the multi-dimensional evaluation results underscores current progress and highlights future research directions for improving model effectiveness in mobile UI navigation.
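To make the idea of invariant-based verification concrete, the minimal sketch below checks a recorded navigation trace against simple invariants rather than a single pass/fail verdict. Everything here is a hypothetical illustration: the `UIState` structure, the invariant names, and the `verify` helper are assumptions for exposition, not Sphinx's actual API.

```python
# Illustrative (hypothetical) invariant-based verification of a UI
# navigation trace: instead of one pass/fail verdict, each invariant
# is checked independently, yielding finer-grained diagnostics.
from dataclasses import dataclass, field

@dataclass
class UIState:
    screen: str                              # current screen identifier
    visible_widgets: set = field(default_factory=set)

def invariant_no_dead_end(trace):
    """Every intermediate screen must expose at least one actionable widget."""
    return all(state.visible_widgets for state in trace[:-1])

def invariant_goal_screen_reached(trace, goal_screen):
    """The trace must terminate on the screen associated with the goal."""
    return bool(trace) and trace[-1].screen == goal_screen

def verify(trace, goal_screen):
    """Run all invariants and return only those that failed."""
    checks = {
        "no_dead_end": invariant_no_dead_end(trace),
        "goal_screen_reached": invariant_goal_screen_reached(trace, goal_screen),
    }
    return {name: ok for name, ok in checks.items() if not ok}

# Example: a two-step navigation trace toward a "settings" screen.
trace = [
    UIState("home", {"menu_button"}),
    UIState("settings", {"wifi_toggle"}),
]
print(verify(trace, "settings"))  # {} -> all invariants hold
```

A real suite would of course check many more properties (e.g., that each model-issued action targets a widget actually present on screen), but the pattern of named, independently failing invariants is what enables the per-sub-capability diagnostics the abstract describes.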