🤖 AI Summary
This study investigates how visual context modulates human affective perception of robotic tactile signals. Using a wearable vibrotactile sleeve delivering multi-intensity affective tactile stimuli, we conducted multimodal affective perception experiments integrating emotionally evocative video scenes with a rigorously controlled psychophysical paradigm. Results reveal a functional dissociation: visual context predominantly governs tactile valence (positive/negative judgment), whereas tactile signal intensity independently drives arousal perception; negative tactile stimuli amplify overall emotional intensity, while positive ones attenuate affective responses. We propose the “context-embedded tactile interaction” paradigm—the first systematic demonstration of both functional decoupling and dynamic coupling between vision and touch across affective dimensions. These findings provide critical empirical evidence and design principles for embodied human–robot affective interaction.
📝 Abstract
Affective interaction is not merely about recognizing emotions; it is an embodied, situated process shaped by context and co-created through interaction. In affective computing, the role of haptic feedback within dynamic emotional exchanges remains underexplored. This study investigates how situational emotional cues influence the perception and interpretation of haptic signals delivered by a robot. In a controlled experiment, 32 participants watched video scenarios in which a robot experienced either positive actions (such as being kissed), negative actions (such as being slapped), or neutral actions. After each video, the robot conveyed its emotional response through haptic communication, delivered via a wearable vibration sleeve worn by the participant. Participants rated the robot's emotional state, judging its valence (positive or negative) and arousal (intensity), based on the video, the haptic feedback, and the combination of the two. The study reveals a dynamic interplay between visual context and touch. Participants' interpretation of haptic feedback was strongly shaped by the emotional context of the video, with visual context often overriding the perceived valence of the haptic signal. Negative haptic cues amplified the perceived emotional intensity of the interaction, while positive cues softened it. Furthermore, haptic feedback overrode participants' perception of the video's arousal. Together, these results offer insights into how situated haptic feedback can enrich affective human-robot interaction, pointing toward more nuanced and embodied approaches to emotional communication with machines.