AI Summary
This study addresses a critical limitation of large language models (LLMs) in privacy-sensitive information-sharing decisions: inference bias arising from ambiguous or missing contextual cues. We identify contextual ambiguity as a fundamental bottleneck in privacy assessment, a finding established through systematic analysis. To mitigate this, we propose Camber, a context-disambiguation framework that uses prompt engineering to elicit self-generated reasoning traces from the model, integrating chain-of-thought (CoT) prompting with context reconstruction to explicitly surface and resolve ambiguities. Evaluated on standard benchmarks, Camber improves privacy disclosure judgments by up to 13.3% in precision and up to 22.3% in recall, while substantially reducing sensitivity to prompt template variations. The approach enhances both the robustness and interpretability of LLM-based privacy decisions, offering a principled solution to context-dependent reasoning failures in sensitive domains.
Abstract
We study the ability of language models to reason about appropriate information disclosure, a central aspect of the evolving field of agentic privacy. Whereas prior work has focused on evaluating a model's ability to align with human decisions, we examine the effect of ambiguity and missing context on model performance in information-sharing decisions. We identify context ambiguity as a crucial barrier to high performance in privacy assessments. By designing Camber, a framework for context disambiguation, we show that model-generated decision rationales can reveal ambiguities, and that systematically disambiguating context based on these rationales yields significant accuracy improvements (up to 13.3% in precision and up to 22.3% in recall) as well as reductions in prompt sensitivity. Overall, our results indicate that context disambiguation is a promising way forward for enhancing agentic privacy reasoning.