🤖 AI Summary
Problem: Designers lack a translational process that lets design requirements shape, and be shaped by, large language model (LLM) behavior, hindering human-centered LLM integration in UX practice.
Method: Through a formative study with 12 designers, the authors identify the need for "designerly adaptation," a translational process in which designers shape and refine LLM behavior as part of UX design. They then built Canvil, a Figma widget that operationalizes designerly adaptation, and used it as a probe in a group-based design study (6 groups, N=17).
Contribution/Results: Designers using Canvil constructively iterated on both their adaptation approaches and their interface designs to improve end-user interaction with LLMs, and they identified promising cross-role collaborative workflows for designerly adaptation. The work opens new avenues for processes and tools that foreground designers' human-centered expertise in developing LLM-powered applications.
📝 Abstract
Advancements in large language models (LLMs) are sparking a proliferation of LLM-powered user experiences (UX). In product teams, designers often craft UX to meet user needs, but it is unclear how they engage with LLMs as a novel design material. Through a formative study with 12 designers, we find that designers seek a translational process that enables design requirements to shape and be shaped by LLM behavior, motivating a need for designerly adaptation to facilitate this translation. We then built Canvil, a Figma widget that operationalizes designerly adaptation. We used Canvil as a probe to study designerly adaptation in a group-based design study (6 groups, N=17), finding that designers constructively iterated on both adaptation approaches and interface designs to enhance end-user interaction with LLMs. Furthermore, designers identified promising collaborative workflows for designerly adaptation. Our work opens new avenues for processes and tools that foreground designers' human-centered expertise when developing LLM-powered applications.