Examining the Effect of Explanations of AI Privacy Redaction in AI-mediated Interactions

📅 2026-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses trust formation in AI-mediated interactions involving sensitive content, where users struggle to trust systems whose automated redaction behavior they cannot see or understand. The authors present a validated AI system that removes sensitive information from messages and generates explanations at varying levels of granularity. Through a controlled user study and quantitative analysis (p-values with Cohen's d and f effect sizes), they investigate how such explanations influence trust. Explanations significantly increased users' perceived effectiveness of the privacy protection (p<0.05, d≈0.3), and when contextual redactions were dense, users relied more heavily on the explanations and found them more useful (p<0.05, f≈0.2). The findings highlight interactions among explanation granularity, redaction density, and individual differences (e.g., age, familiarity with AI), and argue for adaptive, context-aware explanation mechanisms.
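For readers unfamiliar with the effect-size measures cited above, the following is a generic illustration of how Cohen's d (two-group comparisons) and Cohen's f (multi-group, ANOVA-style comparisons) are computed. This is a standard-formula sketch for reference only; it is not the paper's analysis code, and the data shown are hypothetical.

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a, mean_b = sum(group_a) / n_a, sum(group_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

def cohens_f(groups):
    """Cohen's f for a one-way comparison of k groups:
    sqrt(between-group variance / pooled within-group variance)."""
    all_vals = [x for g in groups for x in g]
    n_total, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n_total
    # Between-group variance of group means, weighted by group size.
    between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups) / n_total
    # Pooled within-group variance.
    within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups) / (n_total - k)
    return math.sqrt(between / within)

# Hypothetical ratings: d≈0.3 and f≈0.2 (as reported above) are conventionally
# read as small-to-medium effects.
print(cohens_d([2, 4, 6], [1, 3, 5]))      # two-condition comparison
print(cohens_f([[1, 2, 3], [3, 4, 5]]))    # multi-condition comparison
```

By convention, d ≈ 0.2 is a small effect and d ≈ 0.5 a medium one; for f, the corresponding thresholds are roughly 0.1 and 0.25, which puts the reported values in the small-to-medium range.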

📝 Abstract
AI-mediated communication is increasingly used to facilitate interactions; in privacy-sensitive domains, however, an AI mediator faces the additional challenge of preserving privacy. In these contexts, a mediator may redact or withhold information, raising questions about how users perceive these interventions and whether explanations of system behavior can improve trust. In this work, we investigate how explanations of redaction operations affect user trust in AI-mediated communication. We devise a scenario in which a validated system removes sensitive content from messages and generates explanations of varying detail to communicate its decisions to recipients. We then conduct a user study with $180$ participants, examining how user trust and preferences vary with the amount of redacted content and the level of explanation detail. Our results show that participants believed our system was more effective at preserving privacy when explanations were provided ($p<0.05$, Cohen's $d \approx 0.3$). Contextual factors also had an impact: participants relied more on explanations, and found them more helpful, when the system performed extensive redactions ($p<0.05$, Cohen's $f \approx 0.2$). Explanation preferences further depended on individual differences: factors such as age and baseline familiarity with AI affected user trust in our system. These findings highlight the importance and difficulty of balancing transparency and privacy in AI-mediated communication and suggest that adaptive, context-aware explanations are essential for designing privacy-aware, trustworthy AI systems.
Problem

Research questions and friction points this paper is trying to address.

AI-mediated communication
privacy redaction
user trust
explanations
transparency
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI-mediated communication
privacy redaction
explanation design
user trust
context-aware transparency