iSHIFT: Lightweight Slow-Fast GUI Agent with Adaptive Perception

📅 2025-12-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
GUI agents face a fundamental trade-off between efficient task execution and high-precision visual localization: existing multimodal large language models (MLLMs) suffer from low accuracy in fine-grained UI element identification and lack depth-adaptive reasoning capabilities. To address this, we propose a lightweight GUI agent (2.5B parameters) featuring a novel implicit slow–fast hybrid reasoning mechanism and a learnable perceptual control module, enabling dynamic mode switching and adaptive visual focus localization. Our approach integrates implicit chain-of-thought reasoning, dedicated perception tokens, and dual-path visual encoding—global (fast) and local (slow)—to jointly enhance localization accuracy and reasoning efficiency. Evaluated on multiple GUI benchmarks, the agent achieves state-of-the-art performance with significantly improved inference speed, demonstrating both high effectiveness and computational efficiency.

📝 Abstract
Multimodal Large Language Models (MLLMs) show strong potential for interpreting and interacting with complex, pixel-rich Graphical User Interface (GUI) environments. However, building agents that are both efficient for high-level tasks and precise for fine-grained interactions remains challenging. GUI agents must perform routine actions efficiently while also handling tasks that demand exact visual grounding, yet existing approaches struggle when accuracy depends on identifying specific interface elements. These MLLMs also remain large and cannot adapt their reasoning depth to the task at hand. In this work, we introduce iSHIFT: Implicit Slow-fast Hybrid Inference with Flexible Tokens, a lightweight agent that integrates latent thinking (implicit chain-of-thought) with a perception control module. iSHIFT enables an MLLM to switch between a slow mode, which leverages detailed visual grounding for high precision, and a fast mode, which uses global cues for efficiency. Special perception tokens guide attention to relevant screen regions, allowing the model to decide both how to reason and where to focus. Despite its compact 2.5B size, iSHIFT matches state-of-the-art performance on multiple benchmark datasets.
Problem

Research questions and friction points this paper is trying to address.

Builds efficient and precise GUI interaction agents
Adapts reasoning depth to task demands
Enables accurate visual grounding for interface elements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight MLLM with adaptive slow-fast reasoning
Implicit chain-of-thought integrated with perception control
Perception tokens guide attention to relevant screen regions
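The mode-switching idea above can be sketched as a simple routing step: run the cheap global (fast) path first, and fall back to the detailed local (slow) path only when confidence is low. This is a minimal illustrative sketch; all function names, features, and the threshold are assumptions for exposition, not the paper's actual implementation.

```python
# Hypothetical sketch of iSHIFT-style slow-fast routing.
# All names and thresholds are illustrative, not from the paper.

def fast_pass(global_features):
    """Fast mode: predict an action from coarse global screen
    features. Stubbed with a toy confidence heuristic."""
    confidence = sum(global_features) / len(global_features)
    return "click", confidence

def slow_pass(crop_features):
    """Slow mode: refine grounding using a local crop around the
    region flagged by perception tokens (stubbed)."""
    confidence = max(crop_features)
    return "click_precise", confidence

def ishift_step(global_features, crop_features, threshold=0.7):
    """Route between fast and slow modes based on fast-pass
    confidence, mimicking the implicit mode switch described above."""
    action, conf = fast_pass(global_features)
    if conf >= threshold:
        return action, "fast"
    # Low confidence: take the slower, high-precision path.
    action, _ = slow_pass(crop_features)
    return action, "slow"
```

In the actual model the switch is learned end to end (via the perception control module) rather than set by a fixed threshold; the sketch only shows the control flow.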
Sarthak Mehrotra
Indian Institute of Technology, Bombay
Sairam V C Rebbapragada
Indian Institute of Technology, Hyderabad
Mani Hemanth Reddy Bonthu
Indian Institute of Technology, Hyderabad
Vineeth N Balasubramanian
Professor, Indian Institute of Technology, Hyderabad, India
Deep Learning · Machine Learning · Computer Vision · Explainable AI