🤖 AI Summary
This study addresses algorithmic bias and limited inclusivity in affective computing, problems stemming from overreliance on facial recognition at the expense of bodily expression and cultural diversity, by proposing an embodied, co-creative affective AI paradigm. Methodologically, it integrates MoveNet-based full-body pose tracking with a multi-recommender AI system to enable real-time, dynamic emotion modeling. Across three collaborative phases (Teaching, Exploration, and Cosmos), multicultural participants jointly define emotional categories, enabling bottom-up construction of emotion semantics. Key contributions include: (1) the first empirical validation of full-body, multi-user collaborative affective interaction; (2) significant improvements in user agency and cross-cultural adaptability; and (3) a novel multimedia affective computing framework that explicitly integrates ethical considerations, inclusivity, and model interpretability.
📝 Abstract
Commonaiverse is an interactive installation that explores human emotions through full-body motion tracking and real-time AI feedback. Participants engage in three phases, Teaching, Exploration, and Cosmos, collaboratively expressing and interpreting emotions with the system. The installation integrates MoveNet for precise motion tracking and a multi-recommender AI system that analyzes emotional states dynamically, responding with adaptive audiovisual outputs. By shifting from top-down emotion classification to participant-driven, culturally diverse definitions, we highlight new pathways for inclusive, ethical affective computing. We discuss how this collaborative, unconventional approach pushes multimedia research beyond single-user facial analysis toward a more embodied, co-created paradigm of emotional AI. Furthermore, we reflect on how this reimagined framework fosters user agency, reduces bias, and opens avenues for advanced interactive applications.
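To make the pipeline concrete, the sketch below illustrates one way MoveNet-style keypoints could feed a multi-recommender ensemble that scores participant-defined emotion labels. MoveNet does output 17 keypoints per frame as normalized (y, x, confidence) triples; everything else here, the posture features, the two toy recommenders, the labels `joy` and `calm`, and the averaging scheme, is a hypothetical illustration rather than the installation's actual model.

```python
# Hypothetical sketch: MoveNet-style keypoints -> multi-recommender emotion scores.
# MoveNet's 17-keypoint layout and (y, x, confidence) format are real;
# the features, recommenders, and labels below are illustrative assumptions.
from typing import Dict, List, Tuple

Keypoint = Tuple[float, float, float]  # (y, x, confidence); y grows downward

# MoveNet keypoint indices (fixed by the model definition)
L_WRIST, R_WRIST, L_HIP, R_HIP = 9, 10, 11, 12

def pose_features(kps: List[Keypoint]) -> Dict[str, float]:
    """Derive coarse body-expression features from one pose frame."""
    hip_y = (kps[L_HIP][0] + kps[R_HIP][0]) / 2
    wrist_y = (kps[L_WRIST][0] + kps[R_WRIST][0]) / 2
    # Positive when the wrists are raised above the hip line (expansive posture).
    openness = max(0.0, hip_y - wrist_y)
    # Horizontal spread between the wrists (wide vs. closed stance).
    spread = abs(kps[L_WRIST][1] - kps[R_WRIST][1])
    return {"openness": openness, "spread": spread}

def recommender_posture(f: Dict[str, float]) -> Dict[str, float]:
    """One recommender: scores example labels from posture openness."""
    return {"joy": f["openness"], "calm": 1.0 - f["openness"]}

def recommender_spread(f: Dict[str, float]) -> Dict[str, float]:
    """A second recommender keyed on lateral spread."""
    return {"joy": f["spread"], "calm": 1.0 - f["spread"]}

def combine(recs: List[Dict[str, float]]) -> Dict[str, float]:
    """Average the recommenders' scores into one emotion estimate."""
    labels = {k for r in recs for k in r}
    return {k: sum(r.get(k, 0.0) for r in recs) / len(recs) for k in labels}

# Example frame: arms raised wide (only the indexed keypoints matter here).
frame = [(0.5, 0.5, 0.9)] * 17
frame[L_WRIST] = (0.2, 0.1, 0.9)
frame[R_WRIST] = (0.2, 0.9, 0.9)
frame[L_HIP] = (0.6, 0.45, 0.9)
frame[R_HIP] = (0.6, 0.55, 0.9)

feats = pose_features(frame)
scores = combine([recommender_posture(feats), recommender_spread(feats)])
```

In the installation's participant-driven setting, the label set would be defined collaboratively during the Teaching phase rather than fixed in code, and each recommender could weight features according to the emerging, culturally situated definitions.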