Acceptability of AI Assistants for Privacy: Perceptions of Experts and Users on Personalized Privacy Assistants

📅 2025-09-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the acceptability of Personalized Privacy Assistants (PPAs)—AI agents that autonomously execute privacy decisions based on user-defined preferences—as a way to reduce cognitive load and keep privacy decisions consistent with users' actual preferences. Using qualitative focus groups with 11 domain experts and 26 potential end users, it identifies four critical dimensions shaping PPA acceptance: the credibility of the privacy information sources the agent relies on, the effectiveness of oversight mechanisms, users' digital literacy, and market-structure characteristics. Extending the classical Technology Acceptance Model, the study proposes a governance framework for PPAs that integrates human-centered design principles with institutional coordination. The findings provide both theoretical grounding and empirical evidence to inform the design of privacy-enhancing AI systems and related policy development.

📝 Abstract
Individuals increasingly face an overwhelming number of tasks and decisions. To cope with this new reality, there is growing research interest in developing intelligent agents that can effectively assist people across various aspects of daily life in a tailored manner, with privacy emerging as a particular area of application. Artificial intelligence (AI) assistants for privacy, such as personalized privacy assistants (PPAs), have the potential to automatically execute privacy decisions based on users' pre-defined privacy preferences, sparing them the mental effort and time usually spent on each privacy decision. This helps ensure that, even when users feel overwhelmed or resigned about privacy, the decisions made by PPAs still align with their true preferences and best interests. While research has explored possible designs of such agents, user and expert perspectives on the acceptability of such AI-driven solutions remain largely unexplored. In this study, we conducted five focus groups with domain experts (n = 11) and potential users (n = 26) to uncover key themes shaping the acceptance of PPAs. Factors influencing the acceptability of AI assistants for privacy include design elements (such as the information sources used by the agent), external conditions (such as regulation and literacy education), and systemic conditions surrounding PPAs (e.g., public or market providers and the need to avoid a monopoly). These findings provide theoretical extensions to technology acceptance models applied to PPAs, insights on design, and policy implications for PPAs, as well as broader implications for the design of AI assistants.
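To make the core mechanism concrete, below is a minimal, hypothetical Python sketch of the decision loop the abstract attributes to PPAs: a request to access data is resolved against the user's pre-defined preferences, and unmatched cases are escalated back to the user as an oversight mechanism. The class names, rule format, and fallback behavior are illustrative assumptions, not the paper's design.

```python
# Minimal illustrative sketch (not from the paper): how a PPA might
# auto-resolve data-access requests against user-defined preferences.
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"
    DENY = "deny"
    ASK_USER = "ask_user"  # fall back to human oversight


@dataclass(frozen=True)
class Preference:
    data_type: str   # e.g. "location", "contacts"
    purpose: str     # e.g. "advertising", "core_functionality"
    decision: Decision


@dataclass(frozen=True)
class PermissionRequest:
    app: str
    data_type: str
    purpose: str


def resolve(request: PermissionRequest, prefs: list[Preference]) -> Decision:
    """Apply the first matching pre-defined preference; otherwise defer to the user."""
    for pref in prefs:
        if pref.data_type == request.data_type and pref.purpose == request.purpose:
            return pref.decision
    return Decision.ASK_USER  # unknown cases are escalated rather than guessed


# Example: the user never shares location data for advertising.
prefs = [
    Preference("location", "advertising", Decision.DENY),
    Preference("location", "core_functionality", Decision.ALLOW),
]
print(resolve(PermissionRequest("MapApp", "location", "advertising"), prefs))
# -> Decision.DENY
```

The explicit ASK_USER fallback reflects the oversight concern raised in the study: a PPA that silently guesses on unmatched cases would undermine exactly the alignment with user preferences it is meant to guarantee.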
Problem

Research questions and friction points this paper is trying to address.

Investigating expert and user acceptance of AI privacy assistants
Exploring factors influencing adoption of personalized privacy assistants
Understanding design and policy implications for AI privacy tools
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI-driven personalized privacy assistants that automate privacy decisions
Focus groups with experts and users to assess acceptance
Design elements, external conditions, and systemic conditions shape acceptability