🤖 AI Summary
Facial vibrotactile spatial localization, particularly on the cheeks, remains unquantified, which hinders the face's use as an alternative sensory channel for upper-limb amputees and individuals with spinal cord injury. Method: We employed a miniature vibrotactile actuator array and standard psychophysical localization paradigms to systematically measure spatial discrimination accuracy and confusion patterns across the cheek region. Contribution/Results: Our results demonstrate that the cheek exhibits robust spatial encoding capacity, outperforming most other body sites in localization accuracy. This performance likely reflects the cheek's high mechanosensitivity and the proximity of the face and hand representations in the somatosensory homunculus. To our knowledge, this is the first systematic quantification of facial vibrotactile spatial resolution. The findings fill a critical gap in tactile interface research and provide foundational empirical parameters (localization accuracy, confusion matrices, and optimal actuator spacing) for designing compact, wearable facial haptic interfaces.
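The accuracy and confusion-matrix measures named above follow directly from forced-choice localization trials. As a minimal sketch (the site labels and trial data below are illustrative, not from the paper), each trial pairs a stimulated motor site with the site the participant reports, and the analysis tallies correct identifications and cross-site confusions:

```python
# Hypothetical sketch of deriving localization accuracy and a
# confusion matrix from forced-choice localization trials.
# Site names and trial outcomes are invented for illustration.

def confusion_matrix(trials, sites):
    """Count reported sites per stimulated site.

    trials: list of (stimulated_site, reported_site) pairs.
    Returns a nested dict: counts[stimulated][reported].
    """
    counts = {s: {r: 0 for r in sites} for s in sites}
    for stim, resp in trials:
        counts[stim][resp] += 1
    return counts

def localization_accuracy(trials):
    """Fraction of trials where the reported site matches the stimulus."""
    correct = sum(1 for stim, resp in trials if stim == resp)
    return correct / len(trials)

# Illustrative data: four motor sites on the cheek, six trials.
sites = ["A", "B", "C", "D"]
trials = [("A", "A"), ("A", "B"), ("B", "B"),
          ("C", "C"), ("D", "C"), ("D", "D")]

cm = confusion_matrix(trials, sites)
acc = localization_accuracy(trials)  # 4 of 6 trials correct
```

Off-diagonal cells of the matrix (e.g., `cm["D"]["C"]`) reveal which actuator sites are mutually confusable, which is what would inform the optimal-spacing parameter mentioned above.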
📝 Abstract
The face remains relatively unexplored as a target region for haptic feedback, despite providing a considerable surface area of highly sensitive skin. There are promising applications for facial haptic feedback, especially in cases of severe upper limb loss or spinal cord injury, where the face is typically less affected than other body parts. Moreover, the neural representation of the face is adjacent to that of the hand, and phantom maps have been discovered between the fingertips and the cheeks. However, there is a dearth of compact devices for facial haptic feedback, and vibrotactile stimulation, a common modality of haptic feedback, has not been characterized for localization acuity on the face. We performed a localization experiment on the cheek using an arrangement of off-the-shelf coin vibration motors. The study follows the methods of prior work on other skin regions, in which participants attempt to identify the sites of discrete vibrotactile stimuli. We intend for our results to inform the future development of systems that use vibrotactile feedback to convey information via the face.