ArrayTac: A tactile display for simultaneous rendering of shape, stiffness and friction

📅 2026-03-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the challenge of simultaneously rendering multidimensional tactile cues—such as shape, stiffness, and friction—which existing tactile displays struggle to deliver with high fidelity, thereby limiting perceptual realism. The authors propose a 4×4 piezoelectric-driven tactile display that employs a three-stage micro-lever mechanism for displacement amplification and integrates Hall-effect sensors for closed-loop control. This design enables, for the first time, high-fidelity synchronous rendering of shape, stiffness, and friction within a single device. Building on this hardware, the work further introduces an end-to-end vision-to-tactile translation framework and demonstrates a real-time tele-tactile palpation system operating across distances exceeding 1,000 kilometers. In user studies, first-time participants accurately identified object physical properties, and untrained volunteers achieved 100% accuracy in identifying and precisely localizing both the number and type of tumors in a breast phantom during remote trials.

📝 Abstract
Human-computer interaction in the visual and auditory domains has achieved considerable maturity, yet machine-to-human tactile feedback remains underdeveloped. Existing tactile displays struggle to simultaneously render multiple tactile dimensions, such as shape, stiffness, and friction, which limits the realism of haptic simulation. Here, we present ArrayTac, a piezoelectric-driven tactile display capable of simultaneously rendering shape, stiffness, and friction to reproduce realistic haptic signals. The system comprises a 4×4 array of 16 actuator units, each employing a three-stage micro-lever mechanism to amplify the micrometer-scale displacement of the piezoelectric element, with Hall sensor-based closed-loop control at the end effector to enhance response speed and precision. We further implement two end-to-end pipelines: 1) a vision-to-touch framework that converts visual inputs into tactile signals using multimodal foundation models, and 2) a real-time tele-palpation system operating over distances of several thousand kilometers. In user studies, first-time participants accurately identify object shapes and physical properties with high success rates. In a tele-palpation experiment over 1,000 km, untrained volunteers correctly identified both the number and type of tumors in a breast phantom with 100% accuracy and precisely localized their positions. By simultaneously rendering an object's shape, stiffness, and friction, the system pioneers a new pathway to high-fidelity haptic feedback and delivers a holistic tactile experience that was previously unattainable.
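The abstract's Hall sensor-based closed-loop control can be illustrated with a minimal sketch: a controller reads the end-effector displacement from a Hall sensor and adjusts the piezo drive command to track a target. Everything below is an illustrative assumption, not the authors' implementation: the PI controller, the gains, and the first-order actuator model are all hypothetical.

```python
# Hypothetical per-actuator closed-loop displacement control. The Hall
# sensor measures end-effector displacement; a PI controller drives the
# amplified piezo output toward the target. Gains and the plant model
# are illustrative only.

class PIController:
    """PI controller tracking a target end-effector displacement (um)."""

    def __init__(self, kp: float, ki: float, dt: float):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, target_um: float, measured_um: float) -> float:
        error = target_um - measured_um
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral


def simulate(target_um: float = 50.0, steps: int = 500, dt: float = 1e-3) -> float:
    """Drive a toy first-order actuator model toward the target displacement."""
    ctrl = PIController(kp=0.8, ki=40.0, dt=dt)
    pos = 0.0    # displacement reported by the Hall sensor (um)
    tau = 0.01   # actuator time constant (s), illustrative
    for _ in range(steps):
        command = ctrl.update(target_um, pos)   # piezo drive command
        pos += (command - pos) * dt / tau       # first-order plant response
    return pos
```

The integral term removes the steady-state error that a pure proportional loop would leave against piezo hysteresis and creep, which is presumably why the authors close the loop at the end effector rather than driving the piezo open-loop.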
Problem

Research questions and friction points this paper is trying to address.

- tactile display
- haptic feedback
- multimodal rendering
- shape perception
- stiffness and friction

Innovation

Methods, ideas, or system contributions that make the work stand out.

- tactile display
- multimodal haptics
- piezoelectric actuation
- closed-loop control
- tele-palpation
Tianhai Liang
IIIS, Tsinghua University, Beijing 100084, China.
Shiyi Guo
IIIS, Tsinghua University, Beijing 100084, China.
Baiye Cheng
EIC, Huazhong University of Science and Technology, Wuhan 430074, China.
Zhengrong Xue
IIIS, Tsinghua University
Robot Learning, Robotic Manipulation
Han Zhang
IIIS, Tsinghua University, Beijing 100084, China.
Huazhe Xu
Tsinghua University
Embodied AI, Reinforcement Learning, Computer Vision, Deep Learning