Cognitive Trust in HRI: "Pay Attention to Me and I'll Trust You Even if You are Wrong"

📅 2025-12-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how robotic competence and attentiveness jointly shape human cognitive trust in human-robot interaction, specifically examining whether attentiveness can compensate for low competence. A 2×2 factorial experiment was conducted using a biomimetic robotic dog platform performing a search task, integrating behavioral observation with validated subjective trust scales. Results provide the first empirical evidence of an “affective compensation effect”: high attentiveness significantly mitigates trust erosion caused by low competence—yielding trust levels statistically indistinguishable from those observed under high competence. Conversely, under low attentiveness, low competence triggers a sharp decline in trust. These findings challenge the dominant competence-centric trust model and propose a dual-path trust formation mechanism—integrating both competence and attentiveness as independent yet interdependent determinants. The study thus advances theoretical understanding of trust in autonomous systems and offers empirically grounded design principles for developing trustworthy robots.

📝 Abstract
Cognitive trust, the belief that a robot is capable of accurately performing tasks, is recognized as a central factor in fostering high-quality human-robot interactions. It is well established that performance factors, such as the robot's competence and reliability, shape cognitive trust. Recent studies suggest that affective factors, such as robotic attentiveness, also play a role in building cognitive trust. This work explores the interplay between these two factors. Specifically, we evaluated whether different combinations of robotic competence and attentiveness introduce a compensatory mechanism, in which one factor compensates for the lack of the other. In the experiment, participants performed a search task with a robotic dog in a 2×2 design with two factors: competence (high or low) and attentiveness (high or low). The results revealed that high attentiveness can compensate for low competence: participants who collaborated with a highly attentive robot that performed poorly reported trust levels comparable to those working with a highly competent robot. When the robot did not demonstrate attentiveness, low competence resulted in a substantial decrease in cognitive trust. The findings indicate that building cognitive trust in human-robot interaction may be more complex than previously believed, involving emotional processes that are typically overlooked. We highlight an affective compensatory mechanism that adds a layer to consider alongside traditional competence-based models of cognitive trust.
Problem

Research questions and friction points this paper is trying to address.

Explores interplay between robot competence and attentiveness shaping cognitive trust
Evaluates if attentiveness compensates for low competence in human-robot interaction
Reveals affective compensatory mechanism beyond traditional competence-based trust models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shows that robotic attentiveness can compensate for low competence in building cognitive trust.
Trust in a highly attentive but poorly performing robot matched trust in a highly competent one.
Demonstrates that affective factors add a layer to traditional competence-based trust models.
Adi Manor
Faculty of Data and Decision Sciences, Technion; milab, Reichman University
Dan Cohen
milab, Reichman University
Ziv Keidar
milab, Reichman University
Avi Parush
Israel Institute of Technology
Human Factors Engineering · HCI · Usability · UX
Hadas Erel
Head of social robots group, milab, Reichman University
Human-Robot Interaction · Human-Computer Interaction · Cognitive Psychology