Understanding Users' Security and Privacy Concerns and Attitudes Towards Conversational AI Platforms

📅 2025-04-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates how users' security and privacy concerns about conversational AI platforms evolve. Method: leveraging over 2.5 million Reddit posts, it combines qualitative content analysis with quantitative regression modeling to characterize, at scale, the dynamics and context dependence of users' privacy perceptions across the data collection, usage, and retention stages. Contribution/Results: the study identifies core concern dimensions and behavioral archetypes (e.g., proactive-protective vs. convenience-prioritizing users) and reveals significant attitudinal shifts following major technological events. It proposes a four-stakeholder collaborative governance framework, spanning users, platform providers, enterprises, and policymakers, to enhance transparency, strengthen users' agency over their data, and rebuild trust through actionable, evidence-informed strategies.

📝 Abstract
The widespread adoption of conversational AI platforms has introduced new security and privacy risks. While these risks and their mitigation strategies have been extensively researched from a technical perspective, users' perceptions of these platforms' security and privacy remain largely unexplored. In this paper, we conduct a large-scale analysis of over 2.5M user posts from the r/ChatGPT Reddit community to understand users' security and privacy concerns and attitudes toward conversational AI platforms. Our qualitative analysis reveals that users are concerned about each stage of the data lifecycle (i.e., collection, usage, and retention). They seek mitigations for security vulnerabilities, compliance with privacy regulations, and greater transparency and control in data handling. We also find that users exhibit varied behaviors and preferences when interacting with these platforms. Some users proactively safeguard their data and adjust privacy settings, while others prioritize convenience over privacy risks, dismiss privacy concerns in favor of benefits, or feel resigned to inevitable data sharing. Through qualitative content and regression analysis, we discover that users' concerns evolve over time with the evolving AI landscape and are influenced by technological developments and major events. Based on our findings, we provide recommendations for users, platforms, enterprises, and policymakers to enhance transparency, improve data controls, and increase user trust and adoption.
Problem

Research questions and friction points this paper is trying to address.

Understanding user concerns about AI platform security and privacy
Analyzing evolving user attitudes towards data lifecycle risks
Recommending improvements for transparency and data control
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale analysis of 2.5M Reddit posts
Qualitative content and regression analysis
Recommendations for transparency and data controls
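The regression side of the method pairs coded post data with event timing to detect attitudinal shifts. A minimal sketch of that idea, using synthetic data and hypothetical variable names (not the authors' actual model), regresses the monthly share of privacy-related posts on a time trend plus an indicator for a major event:

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(24)                      # 24 months of observation
event = (months >= 12).astype(float)        # 1 after a hypothetical major event
# synthetic monthly share of privacy-related posts: trend + event shift + noise
share = 0.05 + 0.002 * months + 0.03 * event + rng.normal(0, 0.005, 24)

# OLS via least squares: share ~ intercept + months + event
X = np.column_stack([np.ones_like(months, dtype=float), months, event])
beta, *_ = np.linalg.lstsq(X, share, rcond=None)
intercept, trend, event_effect = beta
print(f"trend per month: {trend:.4f}, post-event shift: {event_effect:.4f}")
```

A positive, significant coefficient on the event indicator would be the sketch's analogue of the paper's finding that concerns shift after major technological events.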