Proactive AI Adoption can be Threatening: When Help Backfires

📅 2025-09-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how proactive interventions by AI assistants in workplace tools—such as unsolicited suggestions or autonomous task execution—affect users’ adoption intentions. Grounded in self-affirmation theory and social exchange theory, it demonstrates that such proactivity triggers perceived self-threat, significantly reducing help acceptance, future usage intention, and performance expectations. Crucially, AI-initiated interventions are perceived as more threatening than equivalent human behaviors, and autonomous execution elicits stronger negative reactions than proactive suggestions. Two scenario-based online experiments (N = 761; N = 571, preregistered) empirically validate the proposed mechanism. This work identifies self-threat as a core psychological barrier to adopting proactive AI and differentiates the distinct impacts of intervention modalities. The findings provide both theoretical grounding and actionable design principles for human-centered AI assistant development.

📝 Abstract
Artificial intelligence (AI) assistants are increasingly embedded in workplace tools, raising the question of how initiative-taking shapes adoption. Prior work highlights trust and expectation mismatches as barriers, but the underlying psychological mechanisms remain unclear. Drawing on self-affirmation and social exchange theories, we theorize that unsolicited help elicits self-threat, reducing willingness to accept assistance, likelihood of future use, and performance expectancy. We report two vignette-based experiments (Study 1: N = 761; Study 2: N = 571, preregistered). Study 1 compared anticipatory and reactive help provided by an AI vs. a human, while Study 2 distinguished between "offering" (suggesting help) and "providing" (acting automatically). In Study 1, AI help was more threatening than human help. Across both studies, anticipatory help increased perceived threat and reduced adoption outcomes. Our findings identify self-threat as a mechanism explaining why proactive AI features may backfire and suggest design implications for AI initiative.
Problem

Research questions and friction points this paper is trying to address.

Proactive AI help causes self-threat in workplaces
Unsolicited AI assistance reduces adoption and performance
AI initiative-taking backfires by increasing perceived threat
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI help is perceived as more threatening than equivalent human help
Anticipatory help increases perceived threat and reduces adoption
Design implications suggested for AI initiative-taking