Code2World: A GUI World Model via Renderable Code Generation

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing GUI world models struggle to simultaneously achieve high visual fidelity and structural controllability, limiting agents’ capacity for forward-looking simulation. This work proposes a novel GUI world model that uses renderable HTML code as an intermediate representation, enabling next-state prediction through action-conditioned code generation. The approach is optimized via a visual feedback mechanism and rendering-aware reinforcement learning, combining supervised fine-tuning of vision-language models with reward signals derived from rendered outputs—specifically, visual-semantic fidelity and action consistency. This framework achieves, for the first time in UI prediction, both high-fidelity rendering and fine-grained structural control. The resulting model, Code2World-8B, matches the performance of GPT-5 and Gemini-3-Pro-Image on next-UI prediction tasks and improves navigation success rates by 9.5% over Gemini-2.5-Flash in AndroidWorld.
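The core idea in the summary is that a single simulation step maps (current screen, action) to renderable HTML, which is then rendered to yield the predicted next visual state. The sketch below illustrates that interface shape only; all function and type names are hypothetical stand-ins, not the paper's released code.

```python
# Hypothetical sketch of the action-conditioned prediction loop described above:
# a VLM coder emits HTML for the next UI state, and a renderer turns that HTML
# into the predicted screenshot. Both steps are stubbed here for illustration.
from dataclasses import dataclass


@dataclass
class Action:
    kind: str           # e.g. "tap", "type", "scroll"
    target: str         # element description or identifier
    text: str = ""      # payload for "type" actions


def predict_next_html(screenshot_path: str, action: Action) -> str:
    """Stand-in for the VLM coder: action-conditioned HTML generation.
    A real system would prompt a fine-tuned vision-language model here."""
    return (
        "<html><body>"
        f"<p>state after {action.kind} on {action.target}</p>"
        "</body></html>"
    )


def render(html: str) -> bytes:
    """Stand-in for a headless-browser rendering step that would
    produce screenshot bytes from the generated HTML."""
    return html.encode("utf-8")


# One forward-simulation step: generate code, then render it to a visual state.
html = predict_next_html("screen_0.png", Action("tap", "login_button"))
next_state = render(html)
```

Keeping HTML as the intermediate representation is what makes the predicted state both renderable (visual fidelity) and editable as structured code (structural control).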

📝 Abstract
Autonomous GUI agents interact with environments by perceiving interfaces and executing actions. As a virtual sandbox, a GUI world model gives agents human-like foresight by enabling action-conditioned prediction. However, existing text- and pixel-based approaches struggle to simultaneously achieve high visual fidelity and fine-grained structural controllability. To this end, we propose Code2World, a vision-language coder that simulates the next visual state via renderable code generation. Specifically, to address data scarcity, we construct AndroidCode by translating GUI trajectories into high-fidelity HTML and refining the synthesized code through a visual-feedback revision mechanism, yielding a corpus of over 80K high-quality screen-action pairs. To adapt existing VLMs to code prediction, we first perform SFT as a cold start for layout-format following, then apply Render-Aware Reinforcement Learning, which uses the rendered outcome as the reward signal by enforcing visual-semantic fidelity and action consistency. Extensive experiments demonstrate that Code2World-8B achieves top-performing next-UI prediction, rivaling the competitive GPT-5 and Gemini-3-Pro-Image. Notably, Code2World significantly enhances downstream navigation success rates in a flexible manner, boosting Gemini-2.5-Flash by +9.5% on AndroidWorld navigation. The code is available at https://github.com/AMAP-ML/Code2World.
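The abstract's Render-Aware Reinforcement Learning scores the rendered outcome on two axes: visual-semantic fidelity and action consistency. The toy sketch below shows one plausible way such a composite reward could be assembled; the scoring functions, token-overlap proxy, and `alpha` weighting are assumptions for illustration, not the paper's actual reward design.

```python
# Hedged sketch of a render-aware reward in the spirit of the abstract:
# reward = weighted sum of visual-semantic fidelity and action consistency,
# both computed on the *rendered* output rather than on raw code.

def visual_semantic_fidelity(rendered: set, reference: set) -> float:
    """Toy proxy: Jaccard overlap of visible text tokens. A real system
    would compare rendered and ground-truth screenshots with a vision model."""
    if not rendered and not reference:
        return 1.0
    return len(rendered & reference) / len(rendered | reference)


def action_consistency(rendered: set, expected_effect: str) -> float:
    """1.0 if the action's expected visible effect appears in the rendered state."""
    return 1.0 if expected_effect in rendered else 0.0


def render_aware_reward(rendered_tokens, reference_tokens,
                        expected_effect, alpha=0.5):
    """Combine the two signals; alpha is an assumed, tunable weight."""
    fidelity = visual_semantic_fidelity(set(rendered_tokens), set(reference_tokens))
    consistency = action_consistency(set(rendered_tokens), expected_effect)
    return alpha * fidelity + (1 - alpha) * consistency


# Example: predicted page shows {home, cart, checkout}; ground truth shows
# {home, cart, search}; the action was expected to reveal "checkout".
r = render_aware_reward(["home", "cart", "checkout"],
                        ["home", "cart", "search"],
                        "checkout")
# → 0.5 * 0.5 (fidelity) + 0.5 * 1.0 (consistency) = 0.75
```

Computing the reward on rendered output rather than on the HTML string is the key design point: it directly optimizes what the agent will actually see, rather than surface-level code similarity.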
Problem

Research questions and friction points this paper is trying to address.

GUI world model
visual fidelity
structural controllability
renderable code generation
action-conditioned prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

renderable code generation
GUI world model
visual-feedback revision
render-aware reinforcement learning
AndroidCode
Yuhao Zheng
University of Science and Technology of China
Li'an Zhong
Sun Yat-sen University
Yi Wang
AMAP, Alibaba Group
Rui Dai
Alibaba Group
machine learning
Kaikui Liu
AMAP, Alibaba Group
Xiangxiang Chu
AMAP, Alibaba Group
Linyuan Lv
University of Science and Technology of China
Philip Torr
Professor, University of Oxford
Department of Engineering
Kevin Qinghong Lin
University of Oxford; National U. of Singapore
Vision and Language · Video Understanding · AI Agent