AI Delegates with a Dual Focus: Ensuring Privacy and Strategic Self-Disclosure

📅 2024-09-26
🏛️ arXiv.org
📈 Citations: 1
✨ Influential: 0
🤖 AI Summary
This study addresses the challenge of balancing privacy preservation with necessary self-disclosure for LLM-based agents in social interactions. We propose the first privacy-aware strategic self-disclosure mechanism, which dynamically modulates information granularity based on relational intimacy and task type. The mechanism integrates empirical user preference analysis, context-aware prompt engineering, controllable text generation, and a lightweight privacy risk assessment module. Experimental results across diverse social scenarios demonstrate that our approach maintains over 92% task success rate and interpersonal trustworthiness while reducing excessive disclosure of sensitive information by 63%, significantly outperforming baseline methods. Our core contribution lies in formalizing privacy decision-making as an interpretable, tunable strategic process, establishing a novel paradigm for designing trustworthy AI agents.
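The summary's core idea, modulating disclosure granularity from relational intimacy and task type, capped by a lightweight risk check, can be illustrated with a minimal sketch. The intimacy levels, task-need scores, granularity tiers, and threshold below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical disclosure-granularity policy in the spirit of the mechanism
# described above. All levels, scores, and thresholds are assumed for
# illustration only.

INTIMACY = {"stranger": 0, "acquaintance": 1, "friend": 2, "family": 3}
TASK_NEED = {"small_talk": 0, "scheduling": 1, "medical_referral": 2}
GRANULARITY = ["withhold", "abstract", "partial", "full"]

def disclosure_level(relation: str, task: str, risk_threshold: int = 3) -> str:
    """Choose how much private detail the delegate may disclose.

    Combines relational intimacy with the task's need for disclosure,
    then caps the result by a simple privacy-risk threshold.
    """
    score = INTIMACY[relation] + TASK_NEED[task]
    # Lightweight risk check: never exceed the allowed granularity budget.
    level = min(score, risk_threshold, len(GRANULARITY) - 1)
    return GRANULARITY[level]

print(disclosure_level("stranger", "small_talk"))      # -> withhold
print(disclosure_level("family", "medical_referral"))  # -> full
```

The key design point this toy policy captures is that disclosure is a tunable decision, not a binary allow/deny: lowering `risk_threshold` trades task utility for privacy, which mirrors the interpretable, tunable process the summary describes.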

๐Ÿ“ Abstract
Large language model (LLM)-based AI delegates are increasingly utilized to act on behalf of users, assisting them with a wide range of tasks through conversational interfaces. Despite their advantages, concerns arise regarding the potential risk of privacy leaks, particularly in scenarios involving social interactions. While existing research has focused on protecting privacy by limiting the access of AI delegates to sensitive user information, many social scenarios require disclosing private details to achieve desired outcomes, necessitating a balance between privacy protection and disclosure. To address this challenge, we conduct a pilot study to investigate user preferences for AI delegates across various social relations and task scenarios, and then propose a novel AI delegate system that enables privacy-conscious self-disclosure. Our user study demonstrates that the proposed AI delegate strategically protects privacy, pioneering its use in diverse and dynamic social interactions.
Problem

Research questions and friction points this paper is trying to address.

- Balancing privacy protection and strategic self-disclosure in AI delegates
- Addressing privacy leaks in LLM-based AI social interactions
- Developing privacy-conscious AI delegates for dynamic social scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

- AI delegates balance privacy and self-disclosure
- User study informs privacy-conscious AI design
- Strategic privacy protection in social interactions