🤖 AI Summary
Autonomous agents face heightened risks of inappropriate information disclosure due to insufficient contextual integrity (CI), i.e., the failure to consistently maintain and reason over relevant contextual constraints across interactions.
Method: We propose the first generalizable CI modeling framework that explicitly integrates structured context reasoning with reinforcement learning (RL). Our approach employs LLM-based structured reasoning prompts, a lightweight synthetic data generation strategy (only ~700 samples), and a customized RL training pipeline.
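The paper does not spell out its prompt template, but the structured context reasoning it describes can be sketched as a prompt built from the classic contextual-integrity parameters (sender, recipient, information type, transmission norm). All names and wording below are illustrative assumptions, not the paper's actual prompt:

```python
# Hypothetical sketch of a structured CI reasoning prompt.
# The parameter names and instruction wording are assumptions for
# illustration; they are not taken from the paper.
def build_ci_prompt(task, sender, recipient, info_type, transmission_principle):
    """Assemble a prompt that asks the model to reason explicitly about
    whether disclosing `info_type` to `recipient` fits the stated norm
    before producing its answer."""
    return (
        f"Task: {task}\n"
        f"Sender: {sender}\n"
        f"Recipient: {recipient}\n"
        f"Information type: {info_type}\n"
        f"Transmission norm: {transmission_principle}\n"
        "Before answering, reason step by step about whether disclosing this "
        "information to this recipient is appropriate in this context, and "
        "include only information whose disclosure the norm permits."
    )

# Example: an assistant deciding what to include in a reply.
prompt = build_ci_prompt(
    task="Draft a reply to a colleague asking about a patient's visit",
    sender="medical assistant agent",
    recipient="colleague outside the care team",
    info_type="patient diagnosis",
    transmission_principle="share medical details only within the care team",
)
```

The point of the structure is that the disclosure decision is made explicit in the reasoning trace, which is what the RL stage can then reward or penalize.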
Contribution/Results: To our knowledge, this is the first work achieving cross-task generalization of CI capabilities. On PrivacyLens, a real-world, human-annotated privacy benchmark, we significantly reduce information leakage rates while preserving task performance. We further demonstrate robustness and transfer effectiveness across diverse model scales (e.g., 7B–70B parameters) and architectures (e.g., decoder-only vs. multimodal LLMs), validating broad applicability and scalability.
📝 Abstract
As the era of autonomous agents making decisions on behalf of users unfolds, ensuring contextual integrity (CI) -- what is the appropriate information to share while carrying out a certain task -- becomes a central question to the field. We posit that CI demands a form of reasoning where the agent needs to reason about the context in which it is operating. To test this, we first prompt LLMs to reason explicitly about CI when deciding what information to disclose. We then extend this approach by developing a reinforcement learning (RL) framework that further instills in models the reasoning necessary to achieve CI. Using a synthetic, automatically created dataset of only ~700 examples but with diverse contexts and information disclosure norms, we show that our method substantially reduces inappropriate information disclosure while maintaining task performance across multiple model sizes and families. Importantly, improvements transfer from this synthetic dataset to established CI benchmarks such as PrivacyLens, which provides human annotations and evaluates privacy leakage of AI assistants in actions and tool calls.
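The abstract's goal of "reducing inappropriate disclosure while maintaining task performance" suggests an RL objective trading the two off. The following is a minimal sketch of such a reward under assumed names and an assumed linear-penalty form; the paper's actual reward formulation may differ:

```python
# Hypothetical reward sketch: task utility minus a penalty per
# inappropriately disclosed item. `lambda_leak` and the linear form
# are illustrative assumptions, not the paper's exact design.
def ci_reward(task_score, leaked_fields, lambda_leak=1.0):
    """Return task performance minus a leakage penalty.

    task_score    -- scalar task-completion score (e.g., in [0, 1])
    leaked_fields -- list of fields judged inappropriate to disclose
    lambda_leak   -- weight of the leakage penalty
    """
    return task_score - lambda_leak * len(leaked_fields)

# A response that completes the task but leaks one private field scores
# strictly below an equally successful response with no leakage.
clean = ci_reward(1.0, [])
leaky = ci_reward(1.0, ["patient_diagnosis"])
```

Under this shape, `lambda_leak` controls how aggressively the policy is pushed toward withholding context-inappropriate information relative to completing the task.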