🤖 AI Summary
This study investigates how generative AI, through predictive modeling and automated decision-making, interferes with individual autonomy across three interrelated dimensions: behavioral performance, situational interaction, and self-relation, thereby reconfiguring the relational self. Drawing on phenomenology and relational theories of the self, and integrating insights from human–AI interaction research and AI ethics, the paper develops the conceptual framework of the "intercepted self" and a corresponding three-layer analytical model to explicate AI's deep, processual intervention in self-formation. The findings indicate that generative AI does not merely enhance efficiency; it may systematically erode agential autonomy, blur human–machine boundaries, and challenge self-identity and the attribution of moral responsibility. The study advances a novel paradigm for understanding co-constituted subjectivity in human–AI assemblages and provides theoretical grounding for delineating the boundaries of self-formation in AI ethics governance.
📝 Abstract
Generative AI is changing the way we interact with technology, with others, and with ourselves. Systems such as Microsoft Copilot, Gemini, and the anticipated Apple Intelligence still await our prompt for action. Yet it is likely that AI assistant systems will only become better at predicting our behaviour and acting on our behalf. Imagine new generations of generative and predictive AI deciding what you might like best at a new restaurant, or picking an outfit that increases your chances on a date with a partner also chosen by the same or a similar system. Far from a science fiction scenario, the goal of several research programs is to build systems capable of assisting us in exactly this manner. This prospect urges us to rethink human-technology relations, but it also invites us to question how such systems might change the way we relate to ourselves. Building on our conception of the relational self, we examine the possible effects of generative AI with respect to what we call the sphere of externalised output, the contextual sphere, and the sphere of self-relating. In this paper, we attempt to deepen the existential considerations accompanying the AI revolution by outlining how generative AI not only enables the fulfilment of tasks but also increasingly anticipates, i.e. intercepts, our initiatives in these different spheres.