Malicious earworms and useful memes: how the far-right surfs on TikTok audio trends

📅 2025-06-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study examines how far-right actors exploited TikTok’s audio trend infrastructure to embed xenophobic content within innocuous memes during the 2024 German state elections, thereby evading content moderation. Employing digital ethnography and cross-meme comparative analysis, the authors reverse-trace audio propagation pathways and variant evolution. Findings reveal that TikTok’s “sound infrastructure” constitutes a critical moderation blind spot: hate-laden audio, though demoted in recommendation algorithms, remains discoverable via associated benign hashtags in search results, enabling covert, persistent online visibility. The study makes three key contributions: first, it systematically demonstrates how audio’s inherent ambiguity, mass appeal, and infrastructural embedding jointly amplify extremist dissemination; second, it theorizes the “acoustic covert diffusion” mechanism; and third, it provides empirically grounded insights and a novel theoretical framework for platform governance and counter-extremism policy.

📝 Abstract
With its remix features, TikTok is the platform of choice for meme-making and dissemination. Creative combinations of video, emoji, and filters allow for an endless stream of memes and trends animated by sound. The platform has focused its moderation on upholding physical safety, investing in the detection of harmful challenges. In response to the DSA, TikTok implemented opt-outs for personalized feeds and features allowing users to report illegal content. At the same time, the platform remains subject to scrutiny. Centering on the role of sound and its intersections with ambiguous memes, the research presented here probed right-wing extremist formations relating to the 2024 German state elections. The analysis evidences how the TikTok sound infrastructure affords a sustained presence of xenophobic content, often cloaked through vernacular modes of communication. These cloaking practices benefit from a sound infrastructure that affords the ongoing posting of user-generated sounds, which spread instantly through the use-this-sound button. Importantly, these sounds are often not clearly recognizable as carriers of extremist content. Songs that do contain hateful lyrics are not eligible for personalized feeds; however, they remain online, where they profit from intersecting with benign meme trends, rendering them visible in search results.
Problem

Research questions and friction points this paper is trying to address.

Examining far-right exploitation of TikTok audio trends for extremist content
Analyzing how xenophobic messages hide in benign meme trends
Assessing TikTok's moderation gaps in detecting cloaked hate speech
Innovation

Methods, ideas, or system contributions that make the work stand out.

TikTok sound infrastructure enables extremist content cloaking
Moderation focuses on detecting harmful challenges
Opt-out and reporting features comply with the DSA