Supporting Informed Self-Disclosure: Design Recommendations for Presenting AI-Estimates of Privacy Risks to Users

📅 2026-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses users' tendency to over-disclose personal information when discussing sensitive topics anonymously, often because they underestimate AI-driven re-identification risks. To raise risk awareness without inducing excessive self-censorship, the work explores how quantified privacy risk estimates (PREs), computed with natural language processing, can be communicated to lay users as contextual risk alerts. Using design fiction and comic-style storyboarding, the authors developed five design concepts for presenting PREs and evaluated them in an online study with 44 Reddit users. The findings are distilled into four design recommendations for improving comprehension of privacy risks while preserving expressive freedom. The authors position this as the first application of design fiction to the communication of PREs, balancing informed decision-making with open discourse in anonymous online environments.

📝 Abstract
People candidly discuss sensitive topics online under the perceived safety of anonymity; yet, for many, this perceived safety is tenuous, as miscalibrated risk perceptions can lead to over-disclosure. Recent advances in Natural Language Processing (NLP) afford an unprecedented opportunity to present users with quantified disclosure-based re-identification risk (i.e., "population risk estimates", PREs). How can PREs be presented to users in a way that promotes informed decision-making, mitigating risk without encouraging unnecessary self-censorship? Using design fictions and comic-boarding, we storyboarded five design concepts for presenting PREs to users and evaluated them through an online survey with N = 44 Reddit users. We found participants had detailed conceptions of how PREs may impact risk awareness and motivation, but envisioned needing additional context and support to effectively interpret and act on risks. We distill our findings into four key design recommendations for how best to present users with quantified privacy risks to support informed disclosure decision-making.
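
As a purely illustrative aside, the sketch below shows one way a quantified re-identification risk estimate and the kind of contextual alert described above might be wired together. Nothing here comes from the paper: the estimator, the `mock_estimate_risk` and `format_alert` names, and the keyword heuristic are all hypothetical stand-ins for the authors' NLP-based system, which is not detailed on this page.

```python
# Hypothetical sketch only: NOT the authors' system. Illustrates how a
# quantified privacy risk estimate (PRE) might be surfaced to a user
# alongside a draft post. The "estimator" below is a keyword stand-in
# for the NLP-based re-identification model the abstract alludes to.

from dataclasses import dataclass


@dataclass
class RiskEstimate:
    probability: float        # estimated re-identification probability, 0..1
    contributing: list[str]   # disclosed details that drive the estimate


def mock_estimate_risk(text: str) -> RiskEstimate:
    """Stand-in risk estimator (hypothetical).

    A real estimator would weigh disclosed attributes (age, location,
    occupation, health details, ...) against population statistics;
    here we just count a few illustrative trigger words.
    """
    triggers = [t for t in ("age", "city", "employer", "diagnosis") if t in text.lower()]
    probability = min(0.95, 0.10 + 0.20 * len(triggers))
    return RiskEstimate(probability=probability, contributing=triggers)


def format_alert(est: RiskEstimate) -> str:
    """Render the estimate as a contextual alert that informs but does not block."""
    pct = round(est.probability * 100)
    lines = [f"About {pct} in 100 readers could re-identify someone sharing these details."]
    if est.contributing:
        lines.append("Details contributing most: " + ", ".join(est.contributing))
    lines.append("You can post as written, edit the flagged details, or cancel.")
    return "\n".join(lines)


if __name__ == "__main__":
    draft = "I'm 34, live in a small city, and my employer found out about my diagnosis."
    print(format_alert(mock_estimate_risk(draft)))
```

Note how the alert leaves the final decision to the user; this mirrors the paper's stated goal of supporting informed disclosure decisions rather than encouraging self-censorship.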
Problem

Research questions and friction points this paper is trying to address.

privacy risk
self-disclosure
re-identification risk
AI estimates
informed decision-making
Innovation

Methods, ideas, or system contributions that make the work stand out.

privacy risk estimation
AI-assisted disclosure
design fiction
re-identification risk
user-centered privacy
Isadora Krsek
Carnegie Mellon University
Meryl Ye
Carnegie Mellon University
Wei Xu
Georgia Institute of Technology
Alan Ritter
Georgia Institute of Technology
Natural Language Processing, Machine Learning, Artificial Intelligence, Information Extraction
Laura A. Dabbish
Carnegie Mellon University
Sauvik Das
Associate Professor, Carnegie Mellon University
Human-Computer Interaction, Privacy, Security, Usable Privacy and Security