🤖 AI Summary
This study addresses the challenge of enabling AI agents to recognize individual, dynamic, and context-dependent privacy boundaries. We propose an AI-driven discriminative experimental paradigm that integrates a between-subjects design, multi-context modeling, and quantitative behavioral analysis, collecting 1,681 privacy-boundary judgments from 169 participants across 61 ecologically valid scenarios. By situating privacy preference elicitation within real-world data flows, we formalize fine-grained, computationally tractable privacy boundaries as an alignment objective for AI systems. Results show that AI delegation significantly increases individual privacy sensitivity while reducing inter-participant consensus, and that communication role (e.g., notifier vs. requester) exerts a main effect on acceptable disclosure levels. This work establishes both a theoretical framework and an empirical foundation for developing privacy-aware AI systems that respect individual heterogeneity.
📝 Abstract
Aligning AI systems with human privacy preferences requires understanding individuals' nuanced disclosure behaviors beyond general norms. Yet eliciting such boundaries remains challenging due to the context-dependent nature of privacy decisions and the complex trade-offs involved. We present an AI-powered elicitation approach that probes individuals' privacy boundaries through a discriminative task. We conducted a between-subjects study that systematically varied communication roles and delegation conditions, yielding 1,681 boundary specifications from 169 participants across 61 scenarios. We examined how these contextual factors and individual differences influence boundary specifications. Quantitative results show that communication roles influence individuals' acceptance of detailed and identifiable disclosure, that AI delegation and individuals' need for privacy heighten sensitivity to disclosed identifiers, and that AI delegation results in less consensus across individuals. Our findings highlight the importance of situating privacy preference elicitation within real-world data flows. We advocate using nuanced privacy boundaries as an alignment goal for future AI systems.
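To make the notion of a "computationally tractable privacy boundary" concrete, the following is a minimal sketch of how a single boundary judgment (scenario, communication role, delegation condition, accepted disclosure level) might be encoded and aggregated into a simple consensus score. The field names, disclosure scale, and modal-agreement metric are illustrative assumptions, not the paper's actual schema or analysis.

```python
# Hypothetical sketch of a machine-readable privacy-boundary judgment.
# All names and the consensus metric below are illustrative assumptions.
from dataclasses import dataclass
from collections import Counter
from typing import List


@dataclass(frozen=True)
class BoundaryJudgment:
    participant_id: str
    scenario_id: str          # one of the study's 61 scenarios
    role: str                 # communication role, e.g. "notifier" or "requester"
    delegated_to_ai: bool     # whether disclosure is delegated to an AI agent
    accepted_level: int       # 0 = withhold ... 3 = detailed, identifiable disclosure


def consensus(judgments: List[BoundaryJudgment]) -> float:
    """Share of participants choosing the modal disclosure level for a scenario.

    Values near 1.0 indicate broad agreement on the boundary; lower values
    correspond to the reduced consensus reported under AI delegation.
    """
    levels = [j.accepted_level for j in judgments]
    modal_count = Counter(levels).most_common(1)[0][1]
    return modal_count / len(levels)


if __name__ == "__main__":
    sample = [
        BoundaryJudgment("p01", "s17", "requester", True, 1),
        BoundaryJudgment("p02", "s17", "requester", True, 1),
        BoundaryJudgment("p03", "s17", "requester", True, 3),
    ]
    print(f"Consensus for scenario s17 under AI delegation: {consensus(sample):.2f}")
```

Under these assumptions, an AI system could compare a proposed disclosure against each individual's recorded `accepted_level` rather than against a single population-wide norm.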