🤖 AI Summary
Existing UI-to-code translation methods model spatial layout and visual design intent inadequately. Method: This paper proposes a modular multimodal agent framework that performs end-to-end translation in three sequential stages: grounding, planning, and generation. It introduces a hierarchical layout reasoning mechanism and an adaptive prompt generation strategy, coupled with a scalable image-to-code synthesis data engine that strengthens generalization. The approach integrates fine-tuned vision-language models (VLMs), layout planning driven by front-end engineering priors, and multi-agent collaborative decision-making. Contribution/Results: On mainstream benchmarks, the method achieves state-of-the-art performance in layout accuracy, structural coherence, and code correctness. It markedly improves translation accuracy, interpretability, and engineering practicality, narrowing the gap between design fidelity and production-ready front-end implementation.
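To make the staged design concrete, here is a minimal, self-contained Python sketch of such a grounding → planning → generation pipeline. All names (`UIComponent`, `ground`, `plan`, `generate`) are hypothetical illustrations, not the authors' API: the toy stubs stand in for the fine-tuned VLM detector, the prior-driven layout planner, and the prompt-based code generator described above. The actual implementation is in the repository linked in the abstract below.

```python
from dataclasses import dataclass


@dataclass
class UIComponent:
    """One detected UI element (hypothetical schema)."""
    label: str                        # e.g. "navbar", "hero-image", "button"
    bbox: tuple[int, int, int, int]   # (x, y, width, height) in pixels


def ground(screenshot_path: str) -> list[UIComponent]:
    # Grounding agent: in the paper, a vision-language model detects and
    # labels UI components in the screenshot. This stub returns fixed toys.
    return [
        UIComponent("navbar", (0, 0, 1280, 64)),
        UIComponent("button", (560, 520, 160, 48)),
        UIComponent("hero-image", (0, 64, 1280, 400)),
    ]


def plan(components: list[UIComponent]) -> list[UIComponent]:
    # Planning agent: organize components into a layout using front-end
    # engineering priors. This toy version just orders them top-to-bottom,
    # left-to-right, standing in for full hierarchical layout reasoning.
    return sorted(components, key=lambda c: (c.bbox[1], c.bbox[0]))


def generate(layout: list[UIComponent]) -> str:
    # Generation agent: the paper adaptively prompts a code model per
    # component; this stub templates plain HTML/CSS placeholders instead.
    divs = [
        f'  <div class="{c.label}" style="width:{c.bbox[2]}px;'
        f'height:{c.bbox[3]}px"></div>'
        for c in layout
    ]
    return "<body>\n" + "\n".join(divs) + "\n</body>"


def ui_to_code(screenshot_path: str) -> str:
    # Three interpretable stages; each intermediate artifact can be
    # inspected or corrected, unlike an end-to-end black-box model.
    return generate(plan(ground(screenshot_path)))


print(ui_to_code("mockup.png"))
```

Keeping the component list and layout tree as explicit intermediate artifacts is what gives the staged approach its interpretability: a failure can be localized to detection, planning, or synthesis rather than debugged end to end.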
📝 Abstract
Automating the transformation of user interface (UI) designs into front-end code holds significant promise for accelerating software development and democratizing design workflows. While recent large language models (LLMs) have demonstrated progress in text-to-code generation, many existing approaches rely solely on natural language prompts, limiting their effectiveness in capturing spatial layout and visual design intent. In contrast, UI development in practice is inherently multimodal, often starting from visual sketches or mockups. To address this gap, we introduce a modular multi-agent framework that performs UI-to-code generation in three interpretable stages: grounding, planning, and generation. The grounding agent uses a vision-language model to detect and label UI components, the planning agent constructs a hierarchical layout using front-end engineering priors, and the generation agent produces HTML/CSS code via adaptive prompt-based synthesis. This design improves robustness, interpretability, and fidelity over end-to-end black-box methods. Furthermore, we extend the framework into a scalable data engine that automatically produces large-scale image-code pairs. Using these synthetic examples, we fine-tune and reinforce an open-source VLM, yielding notable gains in UI understanding and code quality. Extensive experiments demonstrate that our approach achieves state-of-the-art performance in layout accuracy, structural coherence, and code correctness. Our code is made publicly available at https://github.com/leigest519/ScreenCoder.
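As a hedged illustration of the data-engine step, the sketch below packs rendered (screenshot, HTML) pairs into a JSONL file in a generic image-plus-target-text format commonly used for supervised VLM fine-tuning. The `pairs/` directory layout, prompt string, and record schema are assumptions for illustration, not the paper's actual format; see the linked repository for the real pipeline.

```python
import json
from pathlib import Path


def build_sft_records(pair_dir: Path, out_file: Path) -> int:
    """Pack (screenshot, code) pairs into a JSONL file in a generic
    image-plus-target-text format for supervised VLM fine-tuning.

    Assumed layout (hypothetical): pair_dir holds foo.png next to foo.html.
    """
    count = 0
    with out_file.open("w", encoding="utf-8") as f:
        for png in sorted(pair_dir.glob("*.png")):
            html = png.with_suffix(".html")
            if not html.exists():   # skip screenshots without paired code
                continue
            record = {
                "image": str(png),
                "prompt": "Convert this UI screenshot into HTML/CSS.",
                "response": html.read_text(encoding="utf-8"),
            }
            f.write(json.dumps(record) + "\n")
            count += 1
    return count


if __name__ == "__main__":
    n = build_sft_records(Path("pairs"), Path("ui2code_sft.jsonl"))
    print(f"wrote {n} image-code training pairs")
```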