🤖 AI Summary
This study examines the experiences and redress challenges of public figures victimized by non-consensual deepfakes, particularly non-consensual synthetic intimate imagery (NSII). Employing critical discursive psychology and Baumer's "Usees" theoretical framework, it qualitatively analyzes the publicly available testimonies of nine public figures. The findings identify three systemic barriers: victim-blaming discourse, institutional silence, and platform-level redress failure. They further expose the false networked beliefs and commercial logics that enable NSII proliferation. Methodologically, the study advances the "Usees" concept to retheorize technological victimhood beyond individualized attribution. It demonstrates how human–computer interaction design can be leveraged to improve redress pathways and advocates for value- and cognition-oriented interventions targeting the NSII dissemination ecosystem. The work extends both the theoretical scope of digital violence research and its practical intervention dimensions.
📝 Abstract
Deepfake technology is often used to create non-consensual synthetic intimate imagery (NSII), mainly of celebrity women. Through Critical Discursive Psychological analysis we ask: i) how celebrities construct being targeted by deepfakes and ii) how they navigate infrastructural and social obstacles when seeking recourse. In this paper, we adopt Baumer's concept of Usees (stakeholders who are non-consenting, unaware, and directly targeted by technology) to understand public statements made by eight celebrity women and one non-binary individual targeted with NSII. Celebrities describe the harms of being non-consensually targeted by deepfakes and the distress of becoming aware of these videos. They describe various infrastructural and social factors (e.g., blaming/silencing narratives and the industry behind deepfake abuse) which hinder activism and recourse. This work has implications for recognizing the roles of various stakeholders in the infrastructures underlying deepfake abuse and the potential of human-computer interaction to improve existing recourse mechanisms for NSII. We also contribute to understanding how false beliefs online facilitate deepfake abuse. Future work should involve interventions which challenge the values and false beliefs that motivate NSII creation and dissemination.