Position: Contextual Integrity Washing for Language Models

📅 2025-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Widespread misuse of Contextual Integrity (CI) theory in LLM privacy evaluation has led to "CI-washing": superficial invocation of CI that violates its four fundamental tenets, risking distorted risk assessments and ineffective privacy safeguards. Method: The work systematically delineates CI's applicability boundaries in the LLM context, identifying long-overlooked methodological flaws such as prompt sensitivity and positional bias, and proposes a CI-compliance assessment framework integrating theoretical analysis, normative reconstruction, and empirical diagnostics, including prompt-robustness and positional-bias testing. Contribution/Results: It establishes four non-negotiable CI guardrails and introduces a socio-technical alignment standard for rigorous, principle-grounded evaluation, shifting LLM privacy governance from formal compliance toward substantive adherence to CI's ethical and structural tenets.

📝 Abstract
The machine learning community is discovering Contextual Integrity (CI) as a useful framework for assessing the privacy implications of large language models (LLMs). This is an encouraging development. CI theory emphasizes sharing information in accordance with privacy norms and can bridge the social, legal, political, and technical aspects essential for evaluating privacy in LLMs. However, this is also a good moment to reflect on how CI is being used for LLMs. This position paper argues that existing literature adopts CI for LLMs without embracing the theory's fundamental tenets, essentially amounting to a form of "CI-washing." CI-washing could lead to incorrect conclusions and flawed privacy-preserving designs. We clarify the four fundamental tenets of CI theory, systematize prior work according to whether it deviates from these tenets, and highlight overlooked issues in experimental hygiene for LLMs (e.g., prompt sensitivity, positional bias).
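
The two experimental-hygiene pitfalls named in the abstract can be made concrete: positional bias can be probed by re-running an evaluation with option order swapped, and prompt sensitivity by re-running it over paraphrased prompts. The sketch below is illustrative only and is not the paper's framework; the `judge` callable, the paraphrase list, and the "A"/"B" verdict convention are assumed placeholders for whatever LLM call an evaluation actually uses.

```python
# Minimal sketch (assumptions noted above) of two hygiene checks for
# LLM-based privacy evaluations: positional bias and prompt sensitivity.
from collections import Counter
from typing import Callable, List, Tuple


def positional_bias_rate(judge: Callable[[str, str], str],
                         pairs: List[Tuple[str, str]]) -> float:
    """Fraction of (a, b) pairs whose verdict flips when the order is swapped.
    `judge(a, b)` is assumed to return "A" or "B" for the preferred item."""
    flips = 0
    for a, b in pairs:
        first = judge(a, b)    # verdict with the original order
        second = judge(b, a)   # verdict with the order swapped
        # A position-insensitive judge prefers the same underlying item both times.
        consistent = (first == "A" and second == "B") or \
                     (first == "B" and second == "A")
        flips += 0 if consistent else 1
    return flips / len(pairs) if pairs else 0.0


def prompt_sensitivity(judge: Callable[[str], str],
                       paraphrases: List[str]) -> float:
    """Share of paraphrased prompts that do NOT yield the modal answer.
    High values signal that conclusions depend on wording rather than content."""
    answers = [judge(p) for p in paraphrases]
    modal_count = Counter(answers).most_common(1)[0][1]
    return 1.0 - modal_count / len(answers)


if __name__ == "__main__":
    # Toy stand-in judge that always prefers whatever is shown first,
    # i.e. a maximally position-biased judge, so the flip rate is 1.0.
    biased_judge = lambda a, b: "A"
    print(positional_bias_rate(biased_judge, [("x", "y"), ("p", "q")]))  # 1.0
```

In practice the same real LLM call would be passed in for `judge`, and a nonzero flip or disagreement rate would flag that the evaluation's conclusions may reflect presentation effects rather than genuine privacy-norm judgments.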
Problem

Research questions and friction points this paper is trying to address.

Privacy Leakage
Contextual Integrity
Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contextual Integrity
Large Language Models
Privacy Protection Evaluation