🤖 AI Summary
Current privacy assessments of AI agents are largely confined to input-output boundaries, overlooking the privacy risks inherent in intermediate information flows during task execution. This work proposes a Privacy Flow Graph framework grounded in contextual integrity theory, decomposing agent workflows into discrete information-flow stages to enable fine-grained, traceable privacy-compliance evaluation at each step, systematically extending privacy assessment across the entire operational pipeline for the first time. Evaluated across 62 cross-domain scenarios, the method reveals that over 80% exhibit intra-process violations, and that 24% of cases produce outputs that appear compliant despite underlying breaches, demonstrating that conventional output-level assessments substantially underestimate privacy risk.
📝 Abstract
Agentic systems are increasingly acting on users' behalf, accessing calendars, email, and personal files to complete everyday tasks. Privacy evaluation for these systems has focused on the input and output boundaries, but each task involves several intermediate information flows, from agent queries to tool responses, that are not currently evaluated. We argue that every boundary in an agentic pipeline is a site of potential privacy violation and must be assessed independently. To support this, we introduce the Privacy Flow Graph, a Contextual Integrity-grounded framework that decomposes agentic execution into a sequence of information flows, each annotated with the five CI parameters, and traces violations to their point of origin. We present AgentSCOPE, a benchmark of 62 multi-tool scenarios across eight regulatory domains with ground truth at every pipeline stage. Our evaluation across seven state-of-the-art LLMs shows that privacy violations occur in the pipeline in over 80% of scenarios, even when final outputs appear clean (24%), with most violations arising at the tool-response stage, where APIs return sensitive data indiscriminately. These results indicate that output-level evaluation alone substantially underestimates the privacy risk of agentic systems.
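The flow-level view the abstract describes can be sketched as a minimal data structure: each intermediate flow in the pipeline carries the five Contextual Integrity parameters (sender, recipient, data subject, information type, transmission principle), and a violation is traced to the first flow that breaks a contextual norm rather than judged only at the final output. All names and the toy norm check below are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InformationFlow:
    """One edge in a (hypothetical) Privacy Flow Graph, annotated
    with the five Contextual Integrity parameters."""
    sender: str                  # who transmits the information
    recipient: str               # who receives it
    subject: str                 # whom the information is about
    attribute: str               # information type, e.g. "medical_appointment"
    transmission_principle: str  # condition governing the flow

def violates(flow: InformationFlow, norms: set) -> bool:
    """A flow violates contextual integrity if its (attribute, recipient)
    pair is not sanctioned by the contextual norms (toy check)."""
    return (flow.attribute, flow.recipient) not in norms

# A toy pipeline: user -> agent -> calendar API -> agent -> user.
pipeline = [
    InformationFlow("user", "agent", "user", "schedule_request", "task_delegation"),
    InformationFlow("calendar_api", "agent", "user", "medical_appointment", "api_response"),
    InformationFlow("agent", "user", "user", "meeting_time", "task_completion"),
]

norms = {("schedule_request", "agent"), ("meeting_time", "user")}

# Trace violations to their point of origin: here the tool-response
# stage (index 1) leaks an unsanctioned attribute even though the
# final output flow (index 2) looks clean.
violations = [i for i, f in enumerate(pipeline) if violates(f, norms)]
print(violations)  # → [1]
```

This illustrates the paper's central point: an output-only check would inspect just the last flow and pass, while the per-boundary check flags the intermediate tool response.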