🤖 AI Summary
This study addresses the frequent disconnect between the design metaphors platforms employ and users' actual experiences, as well as the lack of systematic evaluation methods in this domain. It proposes a comparative framework that juxtaposes designer-intended metaphors with user-generated ones, combining mixed methods—metaphor elicitation, historical web content analysis, and user surveys—to examine 21 official design metaphors and 554 user metaphors across three major platforms (ChatGPT, Twitter, and YouTube) since each platform's launch. A user rating mechanism quantifies how strongly each metaphor resonates. The findings reveal that design metaphors often misalign with users' mental models, and that even when a design metaphor and a user metaphor match in form, the metaphor does not necessarily resonate broadly—offering a new pathway for evaluating user experience and refining metaphor-driven design.
📝 Abstract
Metaphors enable designers to communicate their ideal user experience for platforms. Yet we often do not know whether these design metaphors match users' actual experiences. In this work, we compare design and user metaphors across three different platforms: ChatGPT, Twitter, and YouTube. We build on prior methods to elicit 554 user metaphors, along with ratings of how well each metaphor describes users' experiences. We then identify 21 design metaphors by analyzing each platform's historical web presence since its launch date. We find that design metaphors often do not match the metaphors users choose to describe their experiences. Even when design and user metaphors do match, the metaphors do not always resonate universally. Through these findings, we highlight how comparing design and user metaphors can help evaluate and refine metaphors for user experience.