AI Summary
This study addresses the challenge of balancing privacy preservation with the self-disclosure that LLM-based agents need for effective social interaction. We propose the first privacy-aware strategic self-disclosure mechanism, which dynamically modulates information granularity based on relational intimacy and task type. The mechanism integrates empirical user preference analysis, context-aware prompt engineering, controllable text generation, and a lightweight privacy risk assessment module. Experimental results across diverse social scenarios demonstrate that our approach maintains over 92% task success rate and interpersonal trustworthiness while reducing excessive disclosure of sensitive information by 63%, significantly outperforming baseline methods. Our core contribution lies in formalizing privacy decision-making as an interpretable, tunable strategic process, establishing a novel paradigm for designing trustworthy AI agents.
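The summary describes disclosure granularity being modulated by relational intimacy and task type, gated by a lightweight risk check. A minimal sketch of that decision logic is below; all names, scales, and thresholds (`GRANULARITY`, `risk_budget`, the linear intimacy cap) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of privacy-aware strategic self-disclosure:
# pick the coarsest information granularity that still serves the
# task, capped by relational intimacy and a risk budget.
from dataclasses import dataclass

GRANULARITY = ["none", "vague", "partial", "full"]  # coarse -> fine

@dataclass
class Context:
    intimacy: float      # relational intimacy in [0, 1] (stranger -> close friend)
    task_requires: str   # minimum granularity the task needs to succeed
    sensitivity: float   # sensitivity of the information in [0, 1]

def disclosure_level(ctx: Context, risk_budget: float = 0.5) -> str:
    """Choose a disclosure granularity: never exceed what the task
    needs, never exceed what intimacy permits, and back off one step
    when the lightweight risk estimate exceeds the budget."""
    required = GRANULARITY.index(ctx.task_requires)
    # Intimacy caps how fine-grained disclosure may become (assumed linear).
    allowed = int(ctx.intimacy * (len(GRANULARITY) - 1))
    level = min(required, allowed)
    # Lightweight risk check: sensitivity discounted by intimacy.
    risk = ctx.sensitivity * (1.0 - ctx.intimacy)
    if risk > risk_budget and level > 0:
        level -= 1
    return GRANULARITY[level]
```

For example, a highly sensitive detail requested by a near-stranger resolves to `"none"`, while the same task with a close contact and low sensitivity permits `"full"` disclosure; the tunable `risk_budget` makes the trade-off interpretable, echoing the paper's framing of privacy decisions as a tunable strategic process.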
Abstract
Large language model (LLM)-based AI delegates are increasingly used to act on behalf of users, assisting them with a wide range of tasks through conversational interfaces. Despite their advantages, they raise concerns about privacy leakage, particularly in scenarios involving social interactions. While existing research has focused on protecting privacy by limiting AI delegates' access to sensitive user information, many social scenarios require disclosing private details to achieve desired outcomes, necessitating a balance between privacy protection and disclosure. To address this challenge, we conduct a pilot study to investigate user preferences for AI delegates across various social relations and task scenarios, and then propose a novel AI delegate system that enables privacy-conscious self-disclosure. Our user study demonstrates that the proposed AI delegate strategically protects privacy, pioneering its use in diverse and dynamic social interactions.