🤖 AI Summary
Existing LLM-driven UI generation methods lack fine-grained modeling of task context and user preferences, yielding outputs that are only weakly user-centered. This paper introduces the first crowdsourcing-informed UI generation framework that explicitly models and integrates fine-grained user preferences—including predictability, efficiency, and explorability—into the LLM generation pipeline, aligning outputs with both task requirements and user intent. The approach combines structured preference modeling, preference-guided prompt engineering, and inference optimization. A 78-participant user study in the image editing domain shows statistically significant improvements in UI–user intent alignment over baseline LLM-based approaches, validating the framework's effectiveness and practical utility.
📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable potential across various design domains, including user interface (UI) generation. However, current LLMs for UI generation tend to offer generic solutions that lack a nuanced understanding of task context and user preferences. We present CrowdGenUI, a framework that enhances LLM-based UI generation with crowdsourced user preferences. This framework addresses these limitations by guiding LLM reasoning with real user preferences, enabling the generation of UI widgets that reflect user needs and task-specific requirements. We evaluate our framework in the image editing domain by collecting a library of 720 user preferences from 50 participants, covering dimensions such as predictability, efficiency, and explorability of various UI widgets. A user study (N=78) demonstrates that UIs generated with our preference-guided framework match user intentions better than those generated by LLMs alone, highlighting the effectiveness of the proposed framework. We further discuss the study findings and present insights for future research on LLM-based user-centered UI generation.