🤖 AI Summary
This study addresses the challenge of trust calibration in human-AI collaborative decision-making, aiming to mitigate both overreliance on and underutilization of AI systems. Through a controlled experiment, it systematically investigates the effects of single-step versus two-step decision protocols, the presence or absence of explanatory information, and the moderating roles of users' domain knowledge and prior AI experience on trust and reliance behaviors. Integrating subjective trust ratings with behavioral metrics such as agreement rate, switch rate, and overreliance rate, the research uncovers significant interaction effects between the decision protocol and explanation mechanisms. Notably, it provides empirical evidence that subjective trust and actual reliance are distinct constructs that must be evaluated independently. Findings indicate that the two-step protocol does not significantly reduce overreliance, and that the efficacy of explanations is moderated by both workflow design and users' knowledge levels.
📝 Abstract
A central challenge in AI-assisted decision making is achieving warranted, well-calibrated trust. Both overtrust (accepting incorrect AI recommendations) and undertrust (rejecting correct advice) should be prevented. Prior studies differ in the design of the decision workflow (whether users see the AI suggestion immediately, a 1-step setup, or must first submit their own decision, a 2-step setup) and in how trust is measured (through self-reports, or as behavioral trust, that is, reliance). We examined the effects and interactions of (a) the type of decision workflow, (b) the presence of explanations, and (c) users' domain knowledge and prior AI experience. We compared reported trust, reliance (agreement rate and switch rate), and overreliance. Results showed no evidence that a 2-step setup reduces overreliance. The decision workflow also did not directly affect self-reported trust, but there was a crossover interaction effect with domain knowledge and explanations, suggesting that the effects of explanations alone may not generalize across workflow setups. Finally, our findings confirm that reported trust and reliance behavior are distinct constructs that should be evaluated separately in AI-assisted decision making.
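
To make the behavioral measures concrete, below is a minimal sketch of how agreement rate, switch rate, and overreliance rate might be computed from per-trial logs of a 2-step workflow. The `Trial` fields and the exact metric definitions are illustrative assumptions based on common usage in this literature, not the paper's own analysis code.

```python
# Hypothetical per-trial log of a 2-step AI-assisted decision task.
# Field names and metric definitions are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Trial:
    initial_choice: str   # participant's decision before seeing AI advice (2-step only)
    ai_advice: str        # recommendation shown by the AI
    final_choice: str     # participant's decision after seeing AI advice
    correct_answer: str   # ground truth for the task

def agreement_rate(trials: list[Trial]) -> float:
    """Share of all trials where the final decision matches the AI advice."""
    return sum(t.final_choice == t.ai_advice for t in trials) / len(trials)

def switch_rate(trials: list[Trial]) -> float:
    """Among trials where the participant initially disagreed with the AI,
    the share where they switched to the AI's recommendation."""
    disagreed = [t for t in trials if t.initial_choice != t.ai_advice]
    if not disagreed:
        return 0.0
    return sum(t.final_choice == t.ai_advice for t in disagreed) / len(disagreed)

def overreliance_rate(trials: list[Trial]) -> float:
    """Among trials where the AI advice was incorrect, the share where the
    participant nevertheless followed it (overtrust expressed in behavior)."""
    ai_wrong = [t for t in trials if t.ai_advice != t.correct_answer]
    if not ai_wrong:
        return 0.0
    return sum(t.final_choice == t.ai_advice for t in ai_wrong) / len(ai_wrong)
```

Note that the switch rate presupposes an initial decision and so is only defined in the 2-step setup; in a 1-step setup, only agreement and overreliance can be observed directly, which is one reason reliance is compared across workflows on multiple measures.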