"What do you expect? You're part of the internet": Analyzing Celebrities' Experiences as Usees of Deepfake Technology

📅 2025-07-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study examines the experiences and redress challenges faced by public figures victimized by non-consensual deepfakes, particularly non-consensual synthetic intimate imagery (NSII). Employing critical discursive psychology and Baumer's "Usees" theoretical framework, it presents a qualitative analysis of the publicly available testimonies of nine public figures. The findings identify three systemic barriers: victim-blaming discourse, institutional silence, and platform-level redress failure. They further expose the underlying false networked beliefs and commercial logics that enable NSII proliferation. Methodologically, the study advances the "Usees" concept to retheorize technological victimhood beyond individualized attribution. It demonstrates how human-computer interaction design can be leveraged to improve redress pathways and advocates for value- and cognition-oriented interventions targeting the NSII dissemination ecosystem. The work extends both the theoretical scope of digital violence research and its practical intervention dimensions.

📝 Abstract
Deepfake technology is often used to create non-consensual synthetic intimate imagery (NSII), mainly of celebrity women. Through Critical Discursive Psychological analysis we ask: i) how celebrities construct being targeted by deepfakes and ii) how they navigate infrastructural and social obstacles when seeking recourse. In this paper, we adopt Baumer's concept of Usees (stakeholders who are non-consenting, unaware and directly targeted by technology) to understand public statements made by eight celebrity women and one non-binary individual targeted with NSII. Celebrities describe the harms of being non-consensually targeted by deepfakes and the distress of becoming aware of these videos. They describe various infrastructural/social factors (e.g. blaming/silencing narratives and the industry behind deepfake abuse) which hinder activism and recourse. This work has implications in recognizing the roles of various stakeholders in the infrastructures underlying deepfake abuse and the potential of human-computer interaction to improve existing recourses for NSII. We also contribute to understanding how false beliefs online facilitate deepfake abuse. Future work should involve interventions which challenge the values and false beliefs that motivate NSII creation/dissemination.
Problem

Research questions and friction points this paper is trying to address.

Analyzing celebrities' experiences as non-consenting targets of deepfake abuse
Exploring infrastructural and social obstacles to seeking recourse for NSII victims
Investigating false online beliefs facilitating deepfake creation and dissemination
Innovation

Methods, ideas, or system contributions that make the work stand out.

Critical Discursive Psychological analysis of deepfake impacts
Baumer's "Usees" concept for non-consenting targets
Human-computer interaction to improve NSII recourse
John Twomey
School of Applied Psychology, University College Cork, Cork, Ireland
Sarah Foley
School of Applied Psychology, University College Cork, Cork, Ireland
Sarah Robinson
School of Applied Psychology, University College Cork, Cork, Ireland
Michael Quayle
University of Limerick & University of KwaZulu-Natal
social psychology, social identity networks, attitude networks, social identity, polarization
Matthew Peter Aylett
Mathematics and Computer Science, Heriot Watt University, Edinburgh, United Kingdom; CereProc Ltd., Edinburgh, United Kingdom
Conor Linehan
School of Applied Psychology, University College Cork, Cork, Ireland
Gillian Murphy
School of Applied Psychology, University College Cork, Cork, Ireland