SpiritSight Agent: Advanced GUI Agent with One Look

📅 2025-03-05
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Current vision-based GUI agents suffer from significant bottlenecks in element localization accuracy, hindering real-world deployment. This paper proposes an end-to-end vision-driven GUI agent that addresses imprecise element localization and input ambiguity in high-resolution, cross-platform interfaces. The approach makes three key contributions: (1) GUI-Lasagne, a hierarchical, structured GUI dataset supporting fine-grained spatial reasoning; (2) Universal Block Parsing (UBP), a novel algorithm enabling pixel-accurate element grounding robust to dynamic-resolution inputs; and (3) joint optimization of state-of-the-art vision-language models with large-scale synthetic GUI data. Evaluated across multi-platform benchmarks, the method achieves a 23.6% average improvement in localization accuracy while maintaining low latency (<380 ms) and strong cross-platform compatibility. To the authors' knowledge, this is the first work to unify high-precision localization, low-latency inference, and robust cross-platform navigation in a single GUI agent framework.

πŸ“ Abstract
Graphical User Interface (GUI) agents show amazing abilities in assisting human-computer interaction, automating human users' navigation on digital devices. An ideal GUI agent is expected to achieve high accuracy, low latency, and compatibility across different GUI platforms. Recent vision-based approaches have shown promise by leveraging advanced Vision Language Models (VLMs). While they generally meet the requirements of compatibility and low latency, these vision-based GUI agents tend to have low accuracy due to their limitations in element grounding. To address this issue, we propose $\textbf{SpiritSight}$, a vision-based, end-to-end GUI agent that excels in GUI navigation tasks across various GUI platforms. First, we create a multi-level, large-scale, high-quality GUI dataset called $\textbf{GUI-Lasagne}$ using scalable methods, empowering SpiritSight with robust GUI understanding and grounding capabilities. Second, we introduce the $\textbf{Universal Block Parsing (UBP)}$ method to resolve the ambiguity problem in dynamic high-resolution visual inputs, further enhancing SpiritSight's ability to ground GUI objects. Through these efforts, the SpiritSight agent outperforms other advanced methods on diverse GUI benchmarks, demonstrating its superior capability and compatibility in GUI navigation tasks. Models are available at $\href{https://huggingface.co/SenseLLM/SpiritSight-Agent-8B}{this URL}$.
Problem

Research questions and friction points this paper is trying to address.

Improves accuracy of vision-based GUI agents
Enhances GUI object grounding in dynamic environments
Ensures compatibility across diverse GUI platforms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-based end-to-end GUI agent
Multi-level large-scale GUI dataset
Universal Block Parsing for ambiguity resolution
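To make the block-parsing idea concrete, below is a minimal, hypothetical sketch of block-relative grounding: a high-resolution screenshot is tiled into fixed-size blocks (as in dynamic-resolution VLM preprocessing), and an absolute pixel coordinate is expressed as a block index plus block-local normalized coordinates. This is an illustration only; the exact UBP formulation, tile size, and coordinate scheme are defined in the paper, and all names here are invented for the example.

```python
def to_block_coords(x, y, img_w, img_h, block_w=448, block_h=448):
    """Map an absolute pixel coordinate (x, y) in an img_w x img_h image
    to (block_index, (local_x, local_y)).

    The image is tiled row-major into block_w x block_h blocks; local
    coordinates are normalized to [0, 1) within the containing block,
    so they stay stable when the overall tiling changes with resolution.
    """
    cols = -(-img_w // block_w)          # ceil division: blocks per row
    col, row = x // block_w, y // block_h
    block_index = row * cols + col       # raster-scan block index
    local_x = (x - col * block_w) / block_w
    local_y = (y - row * block_h) / block_h
    return block_index, (local_x, local_y)
```

For example, in a 1344x896 screenshot tiled 3x2, the point (500, 100) falls in block 1 (second block of the top row), with small local offsets inside that block.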
Zhiyuan Huang
SenseTime Research
Ziming Cheng
National University of Singapore, BUPT, SenseTime
Multimodal LLM, Web Agent, 3D Human Pose Estimation
Junting Pan
MMLab, CUHK
Zhaohui Hou
SenseTime Research
Mingjie Zhan
SenseTime Research