🤖 AI Summary
Existing GUI world models struggle to achieve high visual fidelity and structural controllability simultaneously, limiting agents' capacity for forward-looking simulation. This work proposes a GUI world model that uses renderable HTML code as an intermediate representation, enabling next-state prediction through action-conditioned code generation. The approach combines supervised fine-tuning of a vision-language model with Render-Aware Reinforcement Learning, in which reward signals derived from the rendered output enforce visual-semantic fidelity and action consistency; the training corpus itself is refined through a visual-feedback revision mechanism. This framework achieves, for the first time in UI prediction, both high-fidelity rendering and fine-grained structural control. The resulting model, Code2World-8B, matches GPT-5 and Gemini-3-Pro-Image on next-UI prediction and improves navigation success rates by 9.5% over Gemini-2.5-Flash on AndroidWorld.
📝 Abstract
Autonomous GUI agents interact with environments by perceiving interfaces and executing actions. As a virtual sandbox, a GUI world model endows agents with human-like foresight by enabling action-conditioned prediction of future states. However, existing text- and pixel-based approaches struggle to achieve high visual fidelity and fine-grained structural controllability simultaneously. To address this, we propose Code2World, a vision-language coder that simulates the next visual state via renderable code generation. Specifically, to address data scarcity, we construct AndroidCode by translating GUI trajectories into high-fidelity HTML and refining the synthesized code through a visual-feedback revision mechanism, yielding a corpus of over 80K high-quality screen-action pairs. To adapt existing VLMs to code prediction, we first perform SFT as a cold start for format and layout following, and then apply Render-Aware Reinforcement Learning, which uses the rendered outcome as the reward signal by enforcing visual-semantic fidelity and action consistency. Extensive experiments demonstrate that Code2World-8B achieves top-performing next-UI prediction, rivaling GPT-5 and Gemini-3-Pro-Image. Notably, Code2World enhances downstream navigation success rates in a flexible manner, boosting Gemini-2.5-Flash by +9.5% on AndroidWorld navigation. The code is available at https://github.com/AMAP-ML/Code2World.
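To make the Render-Aware RL reward concrete, the sketch below mocks the pipeline described above with toy components: "rendering" is approximated by extracting visible text tokens from the predicted HTML, visual-semantic fidelity by token-set overlap against the ground-truth page, and action consistency by checking that the element targeted by the action survives in the predicted next state. All function names, the `id`-matching heuristic, and the weights are illustrative assumptions, not the paper's actual reward implementation, which operates on real rendered screenshots.

```python
import re

def mock_render(html: str) -> set:
    """Toy stand-in for a real renderer: treat the set of visible
    text tokens as the 'rendered' appearance of the page."""
    text = re.sub(r"<[^>]+>", " ", html)  # strip tags, keep text content
    return set(text.lower().split())

def visual_fidelity(pred_html: str, gt_html: str) -> float:
    """Jaccard overlap of rendered token sets: a crude proxy for the
    visual-semantic similarity of two rendered screens."""
    p, g = mock_render(pred_html), mock_render(gt_html)
    if not p and not g:
        return 1.0
    return len(p & g) / len(p | g)

def action_consistency(pred_html: str, target_id: str) -> float:
    """1.0 if the element the action manipulated still exists in the
    predicted next state, else 0.0 (toy consistency check)."""
    return 1.0 if f'id="{target_id}"' in pred_html else 0.0

def render_aware_reward(pred_html: str, gt_html: str, target_id: str,
                        w_vis: float = 0.7, w_act: float = 0.3) -> float:
    """Weighted combination of the two rendered-outcome signals,
    as the abstract describes the reward conceptually."""
    return (w_vis * visual_fidelity(pred_html, gt_html)
            + w_act * action_consistency(pred_html, target_id))
```

For example, a prediction that reproduces the ground-truth page `<div id="cart">Cart <span>2 items</span></div>` and keeps the acted-on `cart` element scores the maximum reward of 1.0, while a prediction that drops the item count or the target element is penalized proportionally.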