AgentRaft: Automated Detection of Data Over-Exposure in LLM Agents

📅 2026-03-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the prevalent issue of data over-exposure (DOE)—the unnecessary leakage of sensitive information beyond functional requirements—during cross-tool invocations by large language model (LLM) agents. To tackle this, the authors propose AgentRaft, the first framework to systematically define and automatically detect DOE. AgentRaft constructs cross-tool function call graphs, generates targeted synthetic prompts, and integrates runtime taint tracking with a multi-LLM voting mechanism grounded in GDPR, CCPA, and PIPL compliance standards to enable high-coverage, low-cost privacy validation. Evaluation across 6,675 real-world agent tools reveals that 57.07% exhibit DOE risks. Compared to baseline methods, AgentRaft improves detection accuracy by 87.24%, achieves 99% coverage with only 150 prompts, and reduces validation costs by 88.6%.
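The paper does not include code here; as a minimal sketch of the first module, the snippet below shows how a cross-tool function call graph could be built and traversed to enumerate candidate cross-tool data-flow paths. The tool schemas, field names, and the output-to-input matching heuristic are illustrative assumptions, not AgentRaft's actual implementation.

```python
from collections import defaultdict

# Hypothetical tool schemas: each tool declares its input and output fields.
tools = {
    "fetch_profile": {"inputs": {"user_id"}, "outputs": {"email", "address", "dob"}},
    "book_flight":   {"inputs": {"email", "passport_no"}, "outputs": {"booking_ref"}},
    "send_receipt":  {"inputs": {"email", "booking_ref"}, "outputs": set()},
}

def build_fcg(tools):
    """Add an edge A -> B whenever some output field of A matches an input field of B."""
    fcg = defaultdict(set)
    for a, sa in tools.items():
        for b, sb in tools.items():
            if a != b and sa["outputs"] & sb["inputs"]:
                fcg[a].add(b)
    return fcg

def enumerate_paths(fcg, start, max_depth=3):
    """DFS over the FCG, yielding cross-tool invocation chains to test."""
    stack = [(start, [start])]
    while stack:
        node, path = stack.pop()
        if len(path) > 1:
            yield tuple(path)
        if len(path) < max_depth:
            for nxt in fcg[node]:
                if nxt not in path:
                    stack.append((nxt, path + [nxt]))

fcg = build_fcg(tools)
paths = sorted(enumerate_paths(fcg, "fetch_profile"))
# Chains such as ("fetch_profile", "book_flight", "send_receipt") become
# targets for synthesized test prompts.
```

Each enumerated chain would then be paired with a synthetic prompt designed to deterministically trigger the deep-layer tools on that path.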

📝 Abstract
The rapid integration of Large Language Model (LLM) agents into autonomous task execution has introduced significant privacy concerns within cross-tool data flows. In this paper, we systematically investigate and define a novel risk in LLM agents termed Data Over-Exposure (DOE), where an agent inadvertently transmits sensitive data beyond the scope of user intent and functional necessity. We identify that DOE is primarily driven by the broad data paradigms in tool design and the coarse-grained data processing inherent in LLMs. We present AgentRaft, the first automated framework for detecting DOE risks in LLM agents. AgentRaft combines program analysis with semantic reasoning through three synergistic modules: (1) it constructs a Cross-Tool Function Call Graph (FCG) to model the interaction landscape of heterogeneous tools; (2) it traverses the FCG to synthesize high-quality testing user prompts that act as deterministic triggers for deep-layer tool execution; and (3) it performs runtime taint tracking and employs a multi-LLM voting committee grounded in global privacy regulations (e.g., GDPR, CCPA, PIPL) to accurately identify privacy violations. We evaluate AgentRaft in a testing environment of 6,675 real-world agent tools. Our findings reveal that DOE is a systemic risk, prevalent in 57.07% of potential tool interaction paths. AgentRaft achieves high detection accuracy and effectiveness, outperforming baselines by 87.24%. Furthermore, AgentRaft reaches near-total DOE coverage (99%) within only 150 prompts while reducing per-chain verification costs by 88.6%. Our work provides a practical foundation for building auditable and privacy-compliant LLM agent systems.
Problem

Research questions and friction points this paper is trying to address.

Data Over-Exposure
LLM Agents
Privacy Risk
Cross-Tool Data Flow
Sensitive Data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data Over-Exposure
LLM Agents
Automated Privacy Detection
Cross-Tool Function Call Graph
Multi-LLM Voting