Not My Agent, Not My Boundary? Elicitation of Personal Privacy Boundaries in AI-Delegated Information Sharing

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of enabling AI agents to recognize individual, dynamic, and context-dependent privacy boundaries. The authors propose an AI-driven discriminative experimental paradigm combining a between-subjects design, multi-context modeling, and quantitative behavioral analysis, collecting 1,681 privacy-boundary judgments from 169 participants across 61 ecologically valid scenarios. A key move is to embed privacy preferences directly into real-world data flows, formalizing fine-grained, computationally tractable privacy boundaries as a novel alignment objective for AI systems. Results show that AI delegation significantly increases individual privacy sensitivity while reducing inter-participant consensus, and that communication role (e.g., notifier vs. requester) exerts a main effect on acceptable disclosure levels. The work establishes both a theoretical framework and an empirical foundation for developing privacy-aware AI systems that respect individual heterogeneity.

📝 Abstract
Aligning AI systems with human privacy preferences requires understanding individuals' nuanced disclosure behaviors beyond general norms. Yet eliciting such boundaries remains challenging due to the context-dependent nature of privacy decisions and the complex trade-offs involved. We present an AI-powered elicitation approach that probes individuals' privacy boundaries through a discriminative task. We conducted a between-subjects study that systematically varied communication roles and delegation conditions, resulting in 1,681 boundary specifications from 169 participants for 61 scenarios. We examined how these contextual factors and individual differences influence the boundary specification. Quantitative results show that communication roles influence individuals' acceptance of detailed and identifiable disclosure, AI delegation and individuals' need for privacy heighten sensitivity to disclosed identifiers, and AI delegation results in less consensus across individuals. Our findings highlight the importance of situating privacy preference elicitation within real-world data flows. We advocate using nuanced privacy boundaries as an alignment goal for future AI systems.
Problem

Research questions and friction points this paper is trying to address.

Eliciting personal privacy boundaries in AI-delegated information sharing contexts
Understanding how communication roles and delegation affect privacy decisions
Examining individual differences in privacy boundary specifications across scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI-powered elicitation probes privacy boundaries discriminatively
Systematic study varies communication roles and delegation conditions
Nuanced privacy boundaries serve as AI alignment goal
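The summary reports that AI delegation "results in less consensus across individuals." As a purely illustrative sketch (not the paper's actual analysis code), one simple way to quantify inter-participant consensus is the fraction of participants agreeing with the modal judgment per scenario; the data format and disclosure labels below are hypothetical:

```python
# Hypothetical sketch of a consensus metric over privacy-boundary judgments.
# Labels like "full" / "abstract" / "none" are illustrative, not the paper's.
from collections import Counter

def consensus(judgments):
    """Fraction of participants agreeing with the modal judgment per scenario.

    judgments: dict mapping scenario_id -> list of per-participant labels.
    Returns dict scenario_id -> consensus score in (0, 1].
    """
    scores = {}
    for scenario, labels in judgments.items():
        modal_count = Counter(labels).most_common(1)[0][1]
        scores[scenario] = modal_count / len(labels)
    return scores

# Toy data: a delegated condition with more dispersed judgments scores lower.
self_shared = {"s1": ["full", "full", "full", "abstract"]}
delegated = {"s1": ["full", "abstract", "none", "abstract"]}
print(consensus(self_shared)["s1"])  # 0.75
print(consensus(delegated)["s1"])    # 0.5
```

Under this toy metric, lower scores in the delegated condition would correspond to the reduced consensus the study reports; the paper itself may use a different agreement statistic.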
Bingcan Guo
Department of Human Centered Design & Engineering, University of Washington, United States
Eryue Xu
School of Information Sciences, University of Illinois Urbana-Champaign, United States
Zhiping Zhang
Northeastern University
Human-Centered AI Privacy · Human-AI Collaboration
Tianshi Li
Assistant Professor, Northeastern University
Human-Computer Interaction · Privacy · Human-Centered AI Privacy