🤖 AI Summary
Existing AI-based search methods for mobile UI design inspiration have limited practicality: they rely on manually annotated interface structure (such as view hierarchies) and cannot jointly model semantics like target users and the mood of an app.
Method: We propose the first end-to-end, semantics-driven retrieval framework for UI images. It models core mobile UI design semantics (e.g., user personas and emotional intent) and uses a multimodal large language model (MLLM) to align vision with these semantics directly from screenshots, eliminating dependence on view hierarchies or other metadata annotations.
Contribution/Results: We construct and publicly release the first UI semantic annotation dataset. Experiments demonstrate significant improvements over baselines across multiple automated metrics. Human-centered evaluation with professional designers shows a 42% increase in retrieval relevance and a 37% improvement in contextual alignment.
📝 Abstract
Inspirational search, the process of exploring designs to inform and inspire new creative work, is pivotal in mobile user interface (UI) design. However, exploring the vast space of UI references remains a challenge. Existing AI-based UI search methods often miss crucial semantics like target users or the mood of apps. Additionally, these models typically require metadata like view hierarchies, limiting their practical use. We used a multimodal large language model (MLLM) to extract and interpret semantics from mobile UI images. We identified key UI semantics through a formative study and developed a semantic-based UI search system. Through computational and human evaluations, we demonstrate that our approach significantly outperforms existing UI retrieval methods, offering UI designers a more enriched and contextually relevant search experience. We enhance the understanding of mobile UI design semantics and highlight MLLMs' potential in inspirational search, providing a rich dataset of UI semantics for future studies.
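To make the described pipeline concrete, here is a minimal sketch of a semantics-driven UI retrieval loop, assuming the MLLM step is stubbed out with pre-extracted semantic descriptions and that `sentence-transformers` with the `all-MiniLM-L6-v2` encoder stands in for whatever embedding model the paper actually uses; the file names, descriptions, and `search` helper are illustrative, not the authors' implementation.

```python
# Illustrative sketch only: MLLM extraction is stubbed, model choice is an assumption.
from sentence_transformers import SentenceTransformer, util

# 1) Semantic extraction: in the real system an MLLM would be prompted on each UI
#    screenshot to describe its design semantics (target users, app mood, style).
#    Here we use pre-written descriptions so the sketch stays runnable.
ui_corpus = {
    "meditation_home.png": "calm, soothing onboarding screen for adults seeking mindfulness",
    "kids_game_menu.png": "playful, colorful menu aimed at young children, energetic mood",
    "banking_dashboard.png": "professional dashboard for busy adults, trustworthy neutral tone",
}

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder text encoder
names = list(ui_corpus)
corpus_emb = embedder.encode([ui_corpus[n] for n in names], convert_to_tensor=True)

def search(query: str, top_k: int = 2):
    """Rank UI screens by cosine similarity between a designer's query and each
    MLLM-generated semantic description, with no view-hierarchy metadata needed."""
    query_emb = embedder.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, corpus_emb)[0]
    ranked = sorted(zip(names, scores.tolist()), key=lambda x: x[1], reverse=True)
    return ranked[:top_k]

if __name__ == "__main__":
    # An inspirational query phrased in terms of users and mood, not widget structure.
    print(search("relaxing interface for stressed adults"))
```

The design point this sketch mirrors is that retrieval keys off semantic descriptions of the screenshot rather than structural annotations, which is what lets the approach work on plain UI images.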