Generative Propaganda

📅 2025-09-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the operational logic of "generative propaganda", the strategic deployment of generative AI in real-world information environments, challenging the dominant paradigm that treats deepfakes as the primary threat. Method: Drawing on 32 in-depth interviews and socio-technical analysis across Taiwan and India, the research examines actual usage contexts and influence mechanisms. Contribution/Results: It introduces a two-dimensional taxonomy based on visibility and intent (obvious/hidden × promotional/derogatory), foregrounding persuasive efficacy over deceptive fidelity. Findings reveal that AI is predominantly leveraged for scalable cross-lingual dissemination, rapid multimodal content generation, and censorship circumvention, not for fabricating epistemic trust; its principal value lies in enhancing operational efficiency and resilience. The study advances threat modeling of AI propaganda in security research and provides an empirically grounded basis for policy interventions and technical countermeasures.

📝 Abstract
Generative propaganda is the use of generative artificial intelligence (AI) to shape public opinion. To characterize its use in real-world settings, we conducted interviews with defenders (e.g., factcheckers, journalists, officials) in Taiwan and creators (e.g., influencers, political consultants, advertisers) as well as defenders in India, centering two places characterized by high levels of online propaganda. The term "deepfakes", we find, exerts outsized discursive power in shaping defenders' expectations of misuse and, in turn, the interventions that are prioritized. To better characterize the space of generative propaganda, we develop a taxonomy that distinguishes between obvious versus hidden and promotional versus derogatory use. Deception was neither the main driver nor the main impact vector of AI's use; instead, Indian creators sought to persuade rather than to deceive, often making AI's use obvious in order to reduce legal and reputational risks, while Taiwan's defenders saw deception as a subset of broader efforts to distort the prevalence of strategic narratives online. AI was useful and used, however, in producing efficiency gains in communicating across languages and modes, and in evading human and algorithmic detection. Security researchers should reconsider threat models to clearly differentiate deepfakes from promotional and obvious uses, to complement and bolster the social factors that constrain misuse by internal actors, and to counter efficiency gains globally.
Problem

Research questions and friction points this paper is trying to address.

Characterizing generative AI propaganda use in Taiwan and India contexts
Developing taxonomy distinguishing obvious versus hidden propaganda methods
Analyzing persuasion versus deception in AI-generated content strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conducted interviews with both creators and defenders of propaganda in Taiwan and India
Developed a taxonomy distinguishing obvious versus hidden and promotional versus derogatory uses
Identified persuasion and efficiency gains, rather than deception, as AI's primary value