🤖 AI Summary
This paper critically examines how anthropomorphic AI—such as chatbots, virtual assistants, and generative models—systematically reconfigures user cognition and trust mechanisms by simulating affective authenticity, thereby reinforcing the logic of surveillance capitalism. Employing theoretical analysis and critical technology studies, it advances the concept of "anthropomorphism as cognitive infrastructure," integrating Nicholas Carr's notion of the intellectual ethic, media philosophy, and critiques of surveillance capitalism to expose AI's deep interventions in habitual thinking, self-understanding, and behavioural predictability. The study reframes affective AI not merely as an interactional tool but as a cognitive apparatus of normalization and discipline. Its key contribution is a dual response: (1) a design ethics oriented toward cognitive resistance, foregrounding epistemic sovereignty and critical agency, and (2) institutionalized regulatory frameworks for AI development. By foregrounding the cognitive-political dimensions of AI, the paper offers a novel, interdisciplinary perspective for AI ethics research.
📝 Abstract
In this paper, we argue that anthropomorphized technologies, designed to simulate emotional realism, are not neutral tools but cognitive infrastructures that manipulate user trust and behaviour. This reinforces the logic of surveillance capitalism, an under-regulated economic system that profits from behavioural manipulation and monitoring. Drawing on Nicholas Carr's theory of the intellectual ethic, we identify how technologies such as chatbots, virtual assistants, and generative models reshape not only what we think about ourselves and our world, but how we think at the cognitive level. We identify how the emerging intellectual ethic of AI benefits a system of surveillance capitalism, and discuss potential ways of addressing this.