🤖 AI Summary
This work addresses the limitations of existing mobile GUI agents, which predominantly rely on reactive decision-making and struggle with long-horizon tasks. To overcome this, we propose a world model framework grounded in textual sketches that predicts post-action GUI states by generating task-relevant textual descriptions. The framework incorporates an imagination-based planning mechanism to refine action selection and introduces a permutation-invariant learning strategy that preserves spatial awareness while enabling efficient state prediction. Evaluated on the Android World benchmark, our method achieves state-of-the-art performance, improving task success rate by 5.25% and accurately forecasting key GUI elements, thereby significantly enhancing the agent's capacity for foresighted planning.
📄 Abstract
Mobile GUI agents have shown strong potential in real-world automation and practical applications. However, most existing agents remain reactive, making decisions mainly from the current screen, which limits their performance on long-horizon tasks. Building a world model from repeated interactions enables forecasting action outcomes and supports better decision making for mobile GUI agents. This is challenging because the model must predict post-action states with spatial awareness while remaining efficient enough for practical deployment. In this paper, we propose MobileDreamer, an efficient world-model-based lookahead framework that equips GUI agents with future imagination provided by the world model. It consists of a textual sketch world model and a rollout imagination strategy for the GUI agent. The textual sketch world model forecasts post-action states through a learning process that transforms digital screenshots into key task-related textual sketches, and employs a novel order-invariant learning strategy to preserve the spatial information of GUI elements. The rollout imagination strategy optimizes the action-selection process by leveraging the prediction capability of the world model. Experiments on Android World show that MobileDreamer achieves state-of-the-art performance and improves task success rate by 5.25%. World model evaluations further verify that our textual sketch modeling accurately forecasts key GUI elements.
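The lookahead idea described above can be sketched minimally: for each candidate action, a world model imagines the post-action textual sketch, a value function scores it against the task goal, and the best-scoring action is executed. The function names, the string-based sketch format, and the keyword scorer below are illustrative assumptions, not the paper's actual interfaces.

```python
# Minimal sketch of rollout-imagination action selection.
# All interfaces here are hypothetical stand-ins for the paper's
# learned world model and planner.

def predict_sketch(state: str, action: str) -> str:
    """Stand-in world model: returns a textual sketch of the
    imagined post-action GUI state (a trivial mock here)."""
    return f"{state} -> after({action})"

def score_sketch(sketch: str, goal: str) -> float:
    """Stand-in value function: rewards imagined states that
    mention the task goal."""
    return float(goal in sketch)

def select_action(state: str, candidates: list[str], goal: str) -> str:
    """Pick the candidate action whose imagined next state
    scores highest against the task goal."""
    return max(
        candidates,
        key=lambda a: score_sketch(predict_sketch(state, a), goal),
    )

print(select_action("home screen",
                    ["open settings", "open wifi_settings"],
                    "wifi"))
# prints "open wifi_settings"
```

In a real agent, the one-step lookahead would be replaced by multi-step rollouts under the learned world model, with the scorer estimating progress toward task completion.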