🤖 AI Summary
UI code generation with existing multimodal large language models (MLLMs) is hampered by the scarcity of high-quality, real-world multimodal training data. Method: We introduce the first large-scale, real-world parallel corpus for high-fidelity UI design-to-web frontend code generation, featuring fine-grained visual layout annotations and HTML/CSS structural alignment labels. We propose an automated data cleaning and quality-filtering pipeline, built around a neural scorer, that integrates Common Crawl–sourced data collection, neural quality assessment, and cross-modal alignment annotation. Contribution/Results: The corpus comprises 2,000 high-quality design–code pairs (with ongoing expansion), significantly improving MLLM performance in code accuracy and layout fidelity. This work fills a critical gap by providing the first publicly available, high-quality training dataset tailored specifically for UI code generation, enabling robust, layout-aware multimodal modeling.
📝 Abstract
Automatically generating UI code from webpage design visions can significantly alleviate developers' burden, enabling beginner developers or designers to generate Web pages directly from design diagrams. Prior research has achieved UI code generation from rudimentary design visions or sketches by designing deep neural networks. Inspired by the groundbreaking advances of Multimodal Large Language Models (MLLMs), automatically generating UI code from high-fidelity design images is now emerging as a viable possibility. Nevertheless, our investigation reveals that existing MLLMs are hampered by the scarcity of authentic, high-quality, large-scale datasets, leading to unsatisfactory performance in automated UI code generation. To bridge this gap, we present VISION2UI, a novel dataset extracted from real-world scenarios and augmented with comprehensive layout information, tailored specifically for finetuning MLLMs on UI code generation. The dataset is derived through a series of operations on the open-source Common Crawl corpus, encompassing collection, cleaning, and filtering. To uphold quality, a neural scorer trained on labeled samples refines the data, retaining only higher-quality instances. This process ultimately yields a dataset of 2,000 parallel samples (with many more to come) pairing design visions with UI code. The dataset is available at https://huggingface.co/datasets/xcodemind/vision2ui.
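The abstract describes a scorer-based filtering step: a trained neural scorer assigns each candidate design–code pair a quality score, and only pairs above a threshold are retained. The paper does not publish the scorer's architecture or threshold, so the sketch below is illustrative only: `DesignCodePair`, `heuristic_score`, and the 0.5 cutoff are hypothetical names and values, with a trivial length heuristic standing in for the real neural scorer.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class DesignCodePair:
    """One parallel sample: a rendered design image plus its UI code."""
    screenshot: str  # path to the design screenshot (hypothetical field)
    html: str        # corresponding HTML/CSS source


def heuristic_score(pair: DesignCodePair) -> float:
    """Stand-in for the trained neural scorer described in the abstract.

    Returns a quality score in [0, 1]. A real scorer would be a model
    trained on labeled samples; here a crude length heuristic is used
    purely so the sketch runs end to end.
    """
    return min(len(pair.html) / 1000.0, 1.0)


def filter_corpus(
    pairs: List[DesignCodePair],
    scorer: Callable[[DesignCodePair], float],
    threshold: float = 0.5,  # hypothetical cutoff; the paper does not state one
) -> List[DesignCodePair]:
    """Keep only pairs whose quality score clears the threshold."""
    return [p for p in pairs if scorer(p) >= threshold]


pairs = [
    DesignCodePair("a.png", "<html>" + "<div></div>" * 100 + "</html>"),  # long page
    DesignCodePair("b.png", "<html></html>"),                             # near-empty page
]
kept = filter_corpus(pairs, heuristic_score)
```

With the toy heuristic, only the first pair survives filtering; in the actual pipeline, swapping `heuristic_score` for the trained neural scorer would reproduce the same keep/drop decision structure over Common Crawl candidates.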