🤖 AI Summary
To provide automated, actionable feedback in UI design evaluation, this paper proposes an end-to-end method for generating visually grounded design critiques: given a UI screenshot and design guidelines, it outputs natural-language design comments together with bounding boxes that localize each comment in the screenshot. The method is an iterative visual-prompting pipeline driven entirely by multimodal LLMs, combining few-shot examples tailored to each step, multi-turn refinement of the comment text, and iterative refinement of the bounding boxes. Evaluated with Gemini-1.5-pro and GPT-4o, the pipeline's critiques were generally preferred by human experts over those of a baseline, reducing the gap from human performance by 50% on one rating metric. Applied to open-vocabulary object and attribute detection, the pipeline also outperformed the baseline, suggesting the approach generalizes to other multimodal tasks.
📝 Abstract
Feedback is crucial for every design process, such as user interface (UI) design, and automating design critiques can significantly improve the efficiency of the design workflow. Although existing multimodal large language models (LLMs) excel in many tasks, they often struggle with generating high-quality design critiques -- a complex task that requires producing detailed design comments that are visually grounded in a given design's image. Building on recent advancements in iterative refinement of text output and visual prompting methods, we propose an iterative visual prompting approach for UI critique that takes an input UI screenshot and design guidelines and generates a list of design comments, along with corresponding bounding boxes that map each comment to a specific region in the screenshot. The entire process is driven completely by LLMs, which iteratively refine both the text output and bounding boxes using few-shot samples tailored for each step. We evaluated our approach using Gemini-1.5-pro and GPT-4o, and found that human experts generally preferred the design critiques generated by our pipeline over those by the baseline, with the pipeline reducing the gap from human performance by 50% for one rating metric. To assess the generalizability of our approach to other multimodal tasks, we applied our pipeline to open-vocabulary object and attribute detection, and experiments showed that our method also outperformed the baseline.
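The loop the abstract describes — generate comments with boxes, then alternately refine the text and the boxes over several LLM turns — can be sketched as below. This is a minimal illustrative skeleton, not the authors' implementation: the function names are hypothetical, and the LLM calls are stubbed out where the real pipeline would prompt Gemini-1.5-pro or GPT-4o with few-shot samples.

```python
# Hypothetical sketch of the iterative visual-prompting pipeline from the
# abstract. All function names are illustrative; the "LLM" steps are stubs
# standing in for multimodal model calls with step-specific few-shot samples.

def generate_initial_critique(screenshot, guidelines):
    # Real pipeline: an LLM drafts design comments and rough bounding
    # boxes from the screenshot and guidelines. Stubbed for illustration.
    return [{"comment": " Low text contrast in header ", "box": (0, 0, 320, 48)}]

def refine_text(critiques, guidelines, few_shot_examples):
    # Multi-turn textual refinement step (stub): the LLM revises each
    # comment against the design guidelines.
    return [dict(c, comment=c["comment"].strip()) for c in critiques]

def refine_boxes(critiques, screenshot, few_shot_examples):
    # Visual refinement step (stub): the LLM adjusts each bounding box so
    # it tightly covers the UI region the comment refers to.
    return [dict(c, box=tuple(c["box"])) for c in critiques]

def critique_ui(screenshot, guidelines, n_rounds=3):
    """Generate design comments paired with bounding boxes, then
    iteratively refine both the text and the boxes."""
    critiques = generate_initial_critique(screenshot, guidelines)
    for _ in range(n_rounds):
        critiques = refine_text(critiques, guidelines, few_shot_examples=[])
        critiques = refine_boxes(critiques, screenshot, few_shot_examples=[])
    return critiques

result = critique_ui(screenshot="ui.png",
                     guidelines=["Ensure sufficient text contrast"])
```

Each element of `result` pairs one comment with the screenshot region it applies to, which is what lets the critique be both readable and visually grounded.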