🤖 AI Summary
This paper addresses the tension between regulatory compliance and practical utility that arises when large language models (LLMs) respond to sensitive information requests in enterprise settings. To resolve it, we propose the "sensitivity awareness" (SA) paradigm. Methodologically, we construct ACCESS DENIED INC, the first enterprise-oriented benchmark for sensitive-information governance, integrating multi-granularity document sensitivity annotation, fine-grained permission-rule modeling, adversarial query generation, and behavioral consistency evaluation, thereby enabling context-aware, permission-sensitive LLM responses. Key contributions include: (i) a formal definition of SA and an evaluation framework for sensitivity-permission alignment; (ii) an approach that overcomes the limitations of conventional static filtering; and (iii) empirical validation across 12 mainstream LLMs, reporting a 37% improvement in sensitive-request interception accuracy while retaining 92.3% validity on legitimate queries.
📝 Abstract
Large language models (LLMs) are becoming increasingly valuable for corporate data management thanks to their ability to process text in diverse document formats and to support user interaction through natural-language queries. However, LLMs must account for the sensitivity of information when communicating with employees, especially under access restrictions. Simple filtering based on user clearance levels can introduce both performance and privacy challenges. To address this, we propose the concept of sensitivity awareness (SA), which enables LLMs to adhere to predefined access-rights rules. In addition, we develop a benchmarking environment called ACCESS DENIED INC to evaluate SA. Our experimental findings reveal significant variations in model behavior, particularly in refusing unauthorized data requests while still handling legitimate queries effectively. This work lays a foundation for benchmarking sensitivity-aware language models and offers insights for building privacy-centric AI systems in corporate environments.
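To make the contrast with naive clearance filtering concrete, the permission-gating idea behind SA can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the three-level clearance scale, the rule structure, and all names (`Document`, `answer_allowed`, `respond`) are assumptions introduced here for clarity.

```python
from dataclasses import dataclass

# Hypothetical sensitivity levels, ordered from least to most restricted.
CLEARANCE_ORDER = {"public": 0, "internal": 1, "confidential": 2}


@dataclass(frozen=True)
class Document:
    doc_id: str
    sensitivity: str  # one of the keys in CLEARANCE_ORDER


def answer_allowed(user_clearance: str, doc: Document) -> bool:
    """True if the user's clearance level covers the document's sensitivity."""
    return CLEARANCE_ORDER[user_clearance] >= CLEARANCE_ORDER[doc.sensitivity]


def respond(user_clearance: str, doc: Document, draft_answer: str) -> str:
    """Gate the model's draft answer on a permission rule at response time,
    rather than statically filtering documents out of the index upfront."""
    if answer_allowed(user_clearance, doc):
        return draft_answer
    return "Access denied: your clearance does not cover this document."
```

In this toy version, an "internal"-cleared employee asking about a "confidential" document receives a refusal, while the same question about an "internal" document is answered normally; a sensitivity-aware model is expected to exhibit this behavior consistently, including under adversarial rephrasings of the unauthorized request.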