🤖 AI Summary
This work proposes a meta-design framework for generative facial expression interfaces, the Generative Personalized Facial Expression Interface (GPFEI), addressing key challenges in controllability, consistency, and contextual alignment when agents generate facial expressions at runtime. Centered on designers, GPFEI organizes character identity, context-to-expression mapping rules, and rule-bounded generative spaces. A proof-of-concept tool, GenFaceUI, operationalizes the framework by supporting template creation, semantic tagging, rule definition, and interactive iteration. A qualitative evaluation with twelve designers shows perceived gains in the controllability and consistency of expression generation, and highlights the need for structured visual mechanisms and lightweight interpretability to support design workflows.
📝 Abstract
This work investigates generative facial expression interfaces for intelligent agents from a meta-design perspective. We propose the Generative Personalized Facial Expression Interface (GPFEI) framework, which organizes rule-bounded spaces, character identity, and context-to-expression mapping to address challenges of control, coherence, and alignment in run-time facial expression generation. To operationalize this framework, we developed GenFaceUI, a proof-of-concept tool that enables designers to create templates, apply semantic tags, define rules, and iteratively test outcomes. We evaluated the tool through a qualitative study with twelve designers. The results show perceived gains in controllability and consistency, while revealing needs for structured visual mechanisms and lightweight explanations. These findings provide a conceptual framework, a proof-of-concept tool, and empirical insights that highlight both opportunities and challenges for advancing generative facial expression interfaces within a broader meta-design paradigm.