🤖 AI Summary
GUI agents face a fundamental trade-off between efficient task execution and high-precision visual localization: existing multimodal large language models (MLLMs) identify fine-grained UI elements with low accuracy and cannot adapt their reasoning depth to the task at hand. To address this, the authors propose a lightweight GUI agent (2.5B parameters) built around a novel implicit slow-fast hybrid reasoning mechanism and a learnable perception control module, enabling dynamic mode switching and adaptive visual focus localization. The approach integrates implicit chain-of-thought reasoning, dedicated perception tokens, and dual-path visual encoding, with a global (fast) path and a local (slow) path, to jointly improve localization accuracy and reasoning efficiency. Across multiple GUI benchmarks, the agent matches state-of-the-art performance with significantly faster inference, demonstrating both high effectiveness and computational efficiency.
📝 Abstract
Multimodal Large Language Models (MLLMs) show strong potential for interpreting and interacting with complex, pixel-rich Graphical User Interface (GUI) environments. However, building agents that are both efficient for high-level tasks and precise for fine-grained interactions remains challenging. GUI agents must perform routine actions efficiently while also handling tasks that demand exact visual grounding, yet existing approaches struggle when accuracy depends on identifying specific interface elements. These MLLMs also remain large and cannot adapt their reasoning depth to the task at hand. In this work, we introduce iSHIFT: Implicit Slow-fast Hybrid Inference with Flexible Tokens, a lightweight agent that integrates latent thinking (implicit chain-of-thought) with a perception control module. iSHIFT enables an MLLM to switch between a slow mode, which leverages detailed visual grounding for high precision, and a fast mode, which uses global cues for efficiency. Special perception tokens guide attention to relevant screen regions, allowing the model to decide both how to reason and where to focus. Despite its compact size of 2.5B parameters, iSHIFT matches state-of-the-art performance on multiple benchmark datasets.
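The slow/fast hybrid idea described above can be illustrated with a minimal sketch. This is not the paper's implementation: the names (`Step`, `route_mode`, `encode`, `needs_grounding`) are hypothetical, and the real iSHIFT uses learned latent tokens inside an MLLM to choose the mode, not a hand-written flag. The sketch only shows the control flow: every step gets a cheap global (fast) encoding, while steps that need precise grounding additionally trigger a fine-grained local (slow) pass.

```python
from dataclasses import dataclass

@dataclass
class Step:
    """One agent step. `needs_grounding` is a hypothetical stand-in for
    the learned mode-selection signal produced by perception tokens."""
    instruction: str
    needs_grounding: bool

def route_mode(step: Step) -> str:
    # Slow mode: detailed visual grounding for high precision.
    # Fast mode: global cues only, for routine actions.
    return "slow" if step.needs_grounding else "fast"

def encode(screen: list[list[int]], mode: str) -> dict:
    """Dual-path visual encoding sketch: the global path always runs;
    the local path runs only in slow mode (here, toy "features")."""
    features = {"global": len(screen)}                 # coarse, cheap pass
    if mode == "slow":
        features["local"] = sum(len(r) for r in screen)  # focused, fine-grained pass
    return features

# Usage: clicking a small icon demands grounding; scrolling does not.
click = Step("click the settings icon", needs_grounding=True)
scroll = Step("scroll down", needs_grounding=False)
print(route_mode(click))   # slow
print(route_mode(scroll))  # fast
print(encode([[0, 1], [2, 3]], route_mode(click)))
```

The design point this sketch captures is that the expensive local encoding is gated per step, so routine actions pay only the fast-path cost.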