🤖 AI Summary
Current privacy regulations primarily focus on data collection and are ill-equipped to address the privacy and equity imbalances arising from AI-generated user "Blind Selves": inferences about individuals that remain hidden from them. This work proposes a novel privacy paradigm that shifts from passive defense to active empowerment by introducing the Johari Window psychological model into the AI privacy domain for the first time. Integrating contextual integrity theory, AI inference mechanisms, and human-AI interaction design, the study develops a personal advocacy agent system capable of enforcing social norms. This system enables users to identify, leverage, or suppress implicit AI inferences about themselves, transforming previously invisible inferences into actionable informational assets. By doing so, it curbs the uncontrolled expansion of the Blind Self and achieves a shift in privacy rights, from mere protection to meaningful empowerment.
📝 Abstract
Most privacy regulations function as a passive defensive shield that users must wield themselves. Users are incessantly asked to "opt in" or "opt out" of data collection, forced to make defensive decisions whose consequences are increasingly difficult to predict. Viewed through the Johari Window, a psychological framework of self-awareness based on what is known and unknown to self and others, current policies require users to manage the Open Self and shield the Hidden Self through notice and consent. However, as organizations increasingly use AI to make inferences, the rapid expansion of the Blind Self, attributes known to algorithms but unknown to the user, emerges as a critical challenge. We illustrate how current regulations fall short because they focus on data collection rather than inference and leave this blind spot unguarded. Building on the theory of Contextual Integrity, we propose a paradigm shift from defensive privacy management to proactive privacy advocacy. We argue for the necessity of personal advocacy agents capable of operationalizing social norms to harness the power of AI inference. By illuminating the hidden inferences that users can strategically leverage or suppress, these agents not only restrain the growth of the Blind Self but also mine it for value. By transforming the Unknown Self into a personal asset for users, we can foster a flow of personal information that is equitable, transparent, and individually beneficial in the age of AI.
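The four Johari Window quadrants invoked above can be made concrete with a minimal sketch. This is purely illustrative and not from the paper: the attribute names and the "advocate" step are hypothetical, showing only how an agent might partition a user's attributes by who knows them and surface the Blind Self for review.

```python
# Illustrative sketch (hypothetical, not the paper's system): map personal
# attributes to Johari Window quadrants by who "knows" them, then surface
# the Blind Self -- attributes AI has inferred but the user has not seen.

def johari_quadrant(known_to_self: bool, known_to_others: bool) -> str:
    """Classify one attribute into its Johari Window quadrant."""
    if known_to_self and known_to_others:
        return "Open Self"    # shared; managed via notice and consent today
    if known_to_self:
        return "Hidden Self"  # withheld by the user; shielded by opt-outs
    if known_to_others:
        return "Blind Self"   # known to algorithms, unknown to the user
    return "Unknown Self"     # known to neither party

# Hypothetical attribute inventory: (known_to_self, known_to_others).
attributes = {
    "declared_email": (True, True),
    "private_diary_entry": (True, False),
    "inferred_purchase_intent": (False, True),  # an AI inference
    "latent_health_risk": (False, False),
}

# A toy advocacy step: flag Blind Self items so the user can
# strategically leverage or suppress each inference.
blind_self = [name for name, (s, o) in attributes.items()
              if johari_quadrant(s, o) == "Blind Self"]
print(blind_self)  # -> ['inferred_purchase_intent']
```

The point of the sketch is the gap it exposes: notice-and-consent tooling only ever touches the first two quadrants, while the advocacy agent proposed in the abstract would act on the third.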