Deepfakes on Demand: the rise of accessible non-consensual deepfake image generators

📅 2025-05-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
The misuse of text-to-image (T2I) models to generate non-consensual intimate imagery (NCII) poses a severe societal risk. Method: We conduct the first systematic, quantitative assessment of publicly available deepfake model variants on major platforms (Hugging Face, Civitai), analyzing their accessibility, metadata, and compliance with platform policies; we further evaluate the feasibility of NCII generation using LoRA-finetuned Stable Diffusion and Flux models, requiring only 20 images, consumer-grade hardware (24GB GPU), and 15 minutes of training. Contribution/Results: We identify nearly 35,000 deepfake model variants—nearly 15 million total downloads—with 96% explicitly targeting women and many openly advertising NCII generation capabilities in violation of platform content policies. Our empirical analysis exposes critical gaps in regulatory oversight and technical safeguards, providing foundational evidence to inform model access control, distribution auditing, and content provenance mechanisms.

📝 Abstract
Advances in multimodal machine learning have made text-to-image (T2I) models increasingly accessible and popular. However, T2I models introduce risks such as the generation of non-consensual depictions of identifiable individuals, otherwise known as deepfakes. This paper presents an empirical study exploring the accessibility of deepfake model variants online. Through a metadata analysis of thousands of publicly downloadable model variants on two popular repositories, Hugging Face and Civitai, we demonstrate a marked rise in easily accessible deepfake models. Almost 35,000 examples of publicly downloadable deepfake model variants are identified, primarily hosted on Civitai. These deepfake models have been downloaded almost 15 million times since November 2022, with the models targeting a range of individuals from global celebrities to Instagram users with under 10,000 followers. Both Stable Diffusion and Flux models are used for the creation of deepfake models, with 96% of these targeting women and many signalling intent to generate non-consensual intimate imagery (NCII). Deepfake model variants are often created via the parameter-efficient fine-tuning technique known as low rank adaptation (LoRA), requiring as few as 20 images, 24GB VRAM, and 15 minutes of time, making this process widely accessible via consumer-grade computers. Although these models violate the Terms of Service of hosting platforms and regulation seeks to prevent their dissemination, they remain widely available; these results emphasise the pressing need for greater action to be taken against the creation of deepfakes and NCII.
Problem

Research questions and friction points this paper is trying to address.

Rising accessibility of non-consensual deepfake image generators online
Widespread use of deepfake models targeting women and creating NCII
Need for stronger action against deepfake creation and dissemination
Innovation

Methods, ideas, or system contributions that make the work stand out.

Demonstrates the low barrier to LoRA-based deepfake fine-tuning (20 images, 24GB VRAM, 15 minutes)
Conducts the first large-scale metadata analysis of deepfake variants on Hugging Face and Civitai
Quantifies who is targeted and signals of intent to generate non-consensual intimate imagery
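The metadata-audit approach described above can be sketched with a small, self-contained example that aggregates download counts and flags variants whose tags suggest policy violations. The record schema, tag vocabulary, and flagging heuristic here are illustrative assumptions, not the paper's actual pipeline; on a live platform the records would come from an API such as `huggingface_hub.HfApi().list_models()` or Civitai's REST endpoints.

```python
# Minimal sketch of a platform metadata audit over model-variant records.
# The record fields and keyword heuristic are hypothetical, for illustration only.

SAMPLE_RECORDS = [
    {"id": "variant-a", "downloads": 5200, "base_model": "stable-diffusion",
     "tags": ["lora", "celebrity", "nsfw"]},
    {"id": "variant-b", "downloads": 310, "base_model": "flux",
     "tags": ["lora", "style"]},
    {"id": "variant-c", "downloads": 980, "base_model": "stable-diffusion",
     "tags": ["lora", "real-person", "nsfw"]},
]

# A variant is flagged when its tags combine an identifiable-person marker
# with an intimate-imagery marker (a crude illustrative heuristic).
PERSON_TAGS = {"celebrity", "real-person"}
NSFW_TAGS = {"nsfw"}

def audit(records):
    """Return total downloads and the ids of records matching the heuristic."""
    total = sum(r["downloads"] for r in records)
    flagged = [
        r["id"] for r in records
        if set(r["tags"]) & PERSON_TAGS and set(r["tags"]) & NSFW_TAGS
    ]
    return total, flagged

total, flagged = audit(SAMPLE_RECORDS)
print(total, flagged)  # 6490 ['variant-a', 'variant-c']
```

At the paper's scale the same aggregation runs over tens of thousands of records, which is how figures like the near-15-million cumulative download count are obtained.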