🤖 AI Summary
This study investigates how perceived urgency in human–AI interaction influences users' self-confidence and sense of efficacy, and how this may compromise long-term decision quality and system sustainability. Through a controlled experiment combining behavioral metrics and subjective assessments, the authors analyzed the performance of 30 participants under varying levels of time pressure. The findings reveal, for the first time, that while urgency does not significantly affect trust in AI, it markedly undermines users' self-confidence. Importantly, gradually introducing AI collaboration effectively mitigates this adverse effect and enhances user confidence. These results provide critical human factors insights for designing human–AI systems, underscoring the importance of avoiding abrupt AI deployment in favor of incremental, collaborative integration mechanisms.
📝 Abstract
Studies show that interactions with an AI system foster human users' trust in AI. An often overlooked element of such interaction dynamics is the (sense of) urgency when the human user is prompted by an AI agent, e.g., for advice or guidance. In this paper, we show that although the presence of urgency in human-AI interactions does not affect trust in AI, it may be detrimental to the human user's self-confidence and self-efficacy. In the long run, this loss of confidence may lead to performance loss, suboptimal decisions, human errors, and ultimately, unsustainable AI systems. Our evidence comes from an experiment with 30 human participants. Our results indicate that users may feel more confident in their work when they are eased into the human-AI setup rather than exposed to it without preparation. We elaborate on the implications of this finding for software engineers and decision-makers.