Design Activity for Robot Faces: Evaluating Child Responses To Expressive Faces

📅 2025-04-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Most current child-robot facial designs are static, non-personalized, and adult-centered, which limits emotional engagement. To address this, the authors introduce a child-led participatory design activity: children aged 6–12 hand-draw personalized robot faces, which are rendered digitally on the robot's face. Perception of social intelligence is then evaluated with the validated Perceived Social Intelligence (PSI) scale in a controlled comparative study. Child-drawn faces significantly increased children's perception of the robot's social intelligence (p < 0.01) relative to a generic, pre-designed face. The work contributes a child-authored customization method for social-robot faces, addressing a gap in user-centered, developmentally sensitive human-robot facial interface design, and provides empirical support for individualized, emotionally resonant child-robot interaction.

📝 Abstract
Facial expressiveness plays a crucial role in a robot's ability to engage and interact with children. Prior research has shown that expressive robots can enhance child engagement during human-robot interactions. However, many robots used in therapy settings feature non-personalized, static faces designed with traditional facial feature considerations, which can limit the depth of interactions and emotional connections. Digital faces offer opportunities for personalization, yet the current landscape of robot face design lacks a dynamic, user-centered approach. Specifically, there is a significant research gap in designing robot faces based on child preferences; most robots in child-focused therapy spaces are instead developed from an adult-centric perspective. We present a novel study investigating the influence of child-drawn digital faces in child-robot interactions. This approach centers on a design activity in which children are instructed to draw their own custom robot faces. We compare the perceived social intelligence (PSI) of two implementations: a generic digital face and a robot face personalized with the child's own drawing. The results show that the perceived social intelligence of a robot with a child-drawn face was significantly higher than that of a robot with a generic face.
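The comparison described above — PSI ratings of the same robot under a generic-face condition versus a child-drawn-face condition — can be sketched as a paired analysis of per-child rating differences. The ratings below are illustrative placeholders, not data from the paper, and the paper does not specify which statistical test was used; a paired t-test is one plausible choice for this within-subjects design.

```python
import math

# Hypothetical PSI-style ratings on a 1-5 scale; one pair per child.
# These numbers are invented for illustration only.
generic     = [3.1, 2.8, 3.4, 3.0, 2.9, 3.2]  # generic pre-designed face
child_drawn = [4.2, 3.7, 4.6, 4.0, 3.8, 4.4]  # child-drawn personalized face

# Paired t-test on per-child differences (child_drawn minus generic).
diffs = [c - g for c, g in zip(child_drawn, generic)]
n = len(diffs)
mean_diff = sum(diffs) / n
var = sum((d - mean_diff) ** 2 for d in diffs) / (n - 1)  # sample variance
t = mean_diff / math.sqrt(var / n)  # t statistic with n-1 degrees of freedom

print(f"mean difference = {mean_diff:.2f}, t({n - 1}) = {t:.2f}")
```

A large positive t here would correspond to the paper's finding that child-drawn faces are rated as more socially intelligent; with real data one would compare t against the critical value for n-1 degrees of freedom (or compute a p-value) to judge significance.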
Problem

Research questions and friction points this paper is trying to address.

Evaluating child responses to expressive robot faces
Addressing lack of child-centered robot face design
Comparing social intelligence of generic vs child-drawn faces
Innovation

Methods, ideas, or system contributions that make the work stand out.

Child-drawn digital faces for robots
Dynamic user-centered design approach
Enhanced social intelligence perception
Denielle Oliva
Department of Computer Science and Engineering, University of Nevada, Reno
Joshua Knight
Department of Computer Science and Engineering, University of Nevada, Reno
Tyler Becker
Department of Computer Science and Engineering, University of Nevada, Reno
Heather Amistani
Department of Computer Science and Engineering, University of Nevada, Reno
Monica N. Nicolescu
Department of Computer Science and Engineering, University of Nevada, Reno
David Feil-Seifer
Professor, University of Nevada, Reno
Artificial Intelligence, Robotics, Human-Robot Interaction, Socially Assistive Robotics, Computer