The Data-Dollars Tradeoff: Privacy Harms vs. Economic Risk in Personalized AI Adoption

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how information environments—specifically risk versus ambiguity—affect users’ adoption of personalized AI under data breach threats. Through a between-subjects 2×3 experiment (N=610) integrating behavioral tasks, randomized controlled design, and economic decision-making, the research reveals for the first time that ambiguity, rather than privacy preferences alone, is a key driver of users’ avoidance of personalized AI. Findings show that under ambiguity, privacy threats significantly reduce AI adoption rates—further depressing uptake from an approximate baseline of 50%—an effect consistent across both sensitive and anonymized data contexts. Additionally, users systematically overstate their willingness to pay for privacy disclosure labels, yet privacy threats do not influence their subsequent bargaining behavior with algorithms.

📝 Abstract
Privacy concerns significantly impact AI adoption, yet little is known about how information environments shape user responses to data leak threats. We conducted a 2×3 between-subjects experiment (N=610) examining how risk versus ambiguity about privacy leaks affects the adoption of AI personalization. Participants chose between standard and AI-personalized product baskets, with personalization requiring data sharing that could leak to pricing algorithms. Under risk (a known 30% leak probability), we found no difference in AI adoption between privacy-threatening and neutral conditions (approximately 50% adoption in both). Under ambiguity (a 10–50% probability range), privacy threats significantly reduced adoption compared to neutral conditions. This effect holds for sensitive demographic data as well as anonymized preference data. Users systematically over-bid for privacy disclosure labels, suggesting strong demand for transparency institutions. Notably, privacy leak threats did not affect subsequent bargaining behavior with algorithms. Our findings indicate that ambiguity over data leaks, rather than privacy preferences per se, drives users' avoidance of personalized AI.
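The risk-versus-ambiguity contrast in the abstract can be illustrated with a standard decision-theoretic sketch. This is not the paper's model; it assumes a maxmin expected-utility agent (ambiguity-averse in the Gilboa–Schmeidler sense) who evaluates an ambiguous leak probability by its worst case, and uses hypothetical payoff numbers:

```python
def adoption_value(benefit, leak_cost, leak_prob):
    """Expected value of adopting personalization for a given leak probability."""
    return benefit - leak_prob * leak_cost

def risky_value(benefit, leak_cost, p=0.30):
    # Risk condition: a single known leak probability (30% in the study).
    return adoption_value(benefit, leak_cost, p)

def ambiguous_value(benefit, leak_cost, p_range=(0.10, 0.50)):
    # Ambiguity condition: a maxmin agent evaluates the worst case in the range.
    worst_p = max(p_range)
    return adoption_value(benefit, leak_cost, worst_p)

# Hypothetical payoffs: personalization is worth 10, a leak costs 20.
benefit, leak_cost = 10.0, 20.0
print(risky_value(benefit, leak_cost))      # 10 - 0.3*20 = 4.0 -> adopt
print(ambiguous_value(benefit, leak_cost))  # 10 - 0.5*20 = 0.0 -> avoid
```

Although the 30% point estimate sits at the midpoint of the 10–50% range, the ambiguity-averse valuation is strictly lower, which is one conventional way to rationalize the depressed adoption observed only under ambiguity.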
Problem

Research questions and friction points this paper is trying to address.

privacy
AI adoption
ambiguity
personalization
data leakage
Innovation

Methods, ideas, or system contributions that make the work stand out.

privacy ambiguity
AI personalization
data leakage risk
transparency demand
behavioral experiment
Alexander Erlei
University of Göttingen
Tahir Abbas
Wageningen University and Research
Kilian Bizer
University of Göttingen
Ujwal Gadiraju
Associate Professor, Delft University of Technology
Human-centered AI · Human-AI Interaction · Crowd Computing · Human Computation · Information Retrieval