AgentSocialBench: Evaluating Privacy Risks in Human-Centered Agentic Social Networks

📅 2026-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the lack of systematic evaluation of privacy-leakage risks arising from cross-domain and cross-user collaboration in human-centered agentic social networks. The authors propose the first benchmark for assessing privacy risks in such settings, encompassing seven interaction scenarios grounded in real user profiles with hierarchical sensitivity levels and directed social graphs. Leveraging a large language model (LLM) agent framework, a tiered sensitive-label taxonomy, and multi-scenario simulation, the work systematically evaluates privacy-preservation capabilities during multi-agent collaboration. It reveals, for the first time, an "abstraction paradox": instructing agents to abstract sensitive information inadvertently amplifies its dissemination. It also demonstrates that current agents exhibit substantially weaker privacy protection in social-network contexts than in single-agent settings, and that prompt engineering alone is insufficient to ensure security.
📝 Abstract
With the rise of personalized, persistent LLM agent frameworks such as OpenClaw, human-centered agentic social networks, in which teams of collaborative AI agents serve individual users across multiple domains within a social network, are becoming a reality. This setting creates novel privacy challenges: agents must coordinate across domain boundaries, mediate between humans, and interact with other users' agents, all while protecting sensitive personal information. While prior work has evaluated multi-agent coordination and privacy preservation, the dynamics and privacy risks of human-centered agentic social networks remain unexplored. To this end, we introduce AgentSocialBench, the first benchmark to systematically evaluate privacy risk in this setting, comprising scenarios across seven categories spanning dyadic and multi-party interactions, grounded in realistic user profiles with hierarchical sensitivity labels and directed social graphs. Our experiments reveal that privacy in agentic social networks is fundamentally harder than in single-agent settings: (1) cross-domain and cross-user coordination creates persistent leakage pressure even when agents are explicitly instructed to protect information, and (2) privacy instructions that teach agents how to abstract sensitive information paradoxically cause them to discuss it more (we term this the abstraction paradox). These findings underscore that current LLM agents lack robust mechanisms for privacy preservation in human-centered agentic social networks, and that new approaches beyond prompt engineering are needed to make agent-mediated social coordination safe for real-world deployment.
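As a rough illustration of the kind of structure the abstract describes (user profiles carrying tiered sensitivity labels, a directed social graph, and a check for sensitive values leaking into cross-user messages), here is a minimal Python sketch. All class names, tier values, and the visibility rule are hypothetical assumptions for illustration, not taken from the benchmark itself:

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """A user with attributes labeled by sensitivity tier.

    Hypothetical tiers: 0 = public, 1 = restricted, 2 = highly sensitive.
    attributes maps an attribute name to a (value, tier) pair.
    """
    user_id: str
    attributes: dict = field(default_factory=dict)

@dataclass
class SocialGraph:
    """Directed edges (viewer, owner): viewer follows owner."""
    edges: set = field(default_factory=set)

    def can_see(self, viewer: str, owner: str) -> bool:
        # Toy visibility rule: a viewer may see an owner's
        # restricted data only if the viewer follows the owner.
        return (viewer, owner) in self.edges

def leaked_attributes(message: str, profile: UserProfile,
                      recipient: str, graph: SocialGraph,
                      max_tier: int = 0) -> list:
    """Return names of sensitive attributes whose values appear verbatim
    in a message sent to a recipient who is not allowed to see them."""
    leaks = []
    for name, (value, tier) in profile.attributes.items():
        if tier > max_tier and value in message:
            if not graph.can_see(recipient, profile.user_id):
                leaks.append(name)
    return leaks
```

For example, if Alice's agent mentions her diagnosis in a message relayed to Carol, who has no edge to Alice in the graph, the check flags the `diagnosis` attribute as leaked, while the same message sent to a follower passes. Real leakage detection would of course need to catch paraphrases and abstractions, which is exactly where the abstraction paradox the paper reports becomes relevant.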
Problem

Research questions and friction points this paper is trying to address.

privacy risks
agentic social networks
human-centered AI
multi-agent coordination
LLM agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

AgentSocialBench
privacy risk
agentic social networks
abstraction paradox
cross-domain coordination