Privacy Reasoning in Ambiguous Contexts

πŸ“… 2025-06-13
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This study addresses a key limitation of large language models (LLMs) in privacy-sensitive information-sharing decisions: reasoning failures that arise from ambiguous or missing contextual cues. Through systematic analysis, the authors identify contextual ambiguity as a fundamental bottleneck in privacy assessment. To mitigate it, they propose Camber, a context-disambiguation framework that elicits model-generated decision rationales, combining chain-of-thought (CoT) prompting with context reconstruction to explicitly surface and resolve ambiguities. Evaluated on standard benchmarks, Camber improves privacy disclosure judgments by up to 13.3% in precision and up to 22.3% in recall, while substantially reducing sensitivity to prompt template variations. The approach enhances both the robustness and interpretability of LLM-based privacy decisions, offering a principled route past context-dependent reasoning failures in sensitive domains.

πŸ“ Abstract
We study the ability of language models to reason about appropriate information disclosure - a central aspect of the evolving field of agentic privacy. Whereas previous works have focused on evaluating a model's ability to align with human decisions, we examine the role of ambiguity and missing context on model performance when making information-sharing decisions. We identify context ambiguity as a crucial barrier for high performance in privacy assessments. By designing Camber, a framework for context disambiguation, we show that model-generated decision rationales can reveal ambiguities and that systematically disambiguating context based on these rationales leads to significant accuracy improvements (up to 13.3% in precision and up to 22.3% in recall) as well as reductions in prompt sensitivity. Overall, our results indicate that approaches for context disambiguation are a promising way forward to enhance agentic privacy reasoning.
Problem

Research questions and friction points this paper is trying to address.

Study language models' ability to reason about information disclosure
Examine ambiguity's impact on model performance in privacy decisions
Propose context disambiguation to improve privacy assessment accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses model-generated rationales for ambiguity detection
Introduces Camber for systematic context disambiguation
Improves privacy decision accuracy via disambiguation
πŸ”Ž Similar Papers
No similar papers found.