🤖 AI Summary
This study investigates how bounty mechanisms on community-driven AI-generated content platforms are exploited to incentivize the creation of inappropriate material - such as adult content and deepfakes - thereby perpetuating gendered social harms, violating consent norms, and exposing systemic governance failures. Through a longitudinal analysis of all publicly posted bounty requests on the Civitai platform over its first 14 months, combined with large-scale data mining, content classification, and policy review, this work provides the first systematic quantification of the association between bounty systems and sensitive content. Findings reveal that "Not Safe For Work" (NSFW) content dominates and is steadily increasing; roughly 20% of requesters account for nearly half of all requests; and deepfake-related bounties disproportionately target female celebrities. Despite explicitly violating platform policies, such requests remain pervasive, highlighting critical regulatory gaps and structural risks.
📝 Abstract
Generative AI systems increasingly enable the production of highly realistic synthetic media. Civitai, a popular community-driven platform for AI-generated content, operates a monetized feature called Bounties, which allows users to commission the generation of content in exchange for payment. To examine how this mechanism is used and what content it incentivizes, we conduct a longitudinal analysis of all publicly available bounty requests collected over a 14-month period following the platform's launch. We find that the bounty marketplace is dominated by tools that let users steer AI models toward content they were not trained to generate. At the same time, requests for content that is "Not Safe For Work" are widespread and have increased steadily over time, now comprising a majority of all bounties. Participation in bounty creation is uneven, with 20% of requesters accounting for roughly half of requests. Requests for "deepfakes" - media depicting identifiable real individuals - exhibit a higher concentration than other types of bounties. A nontrivial subset of these requests involves explicit deepfakes despite platform policies prohibiting such content. These bounties disproportionately target female celebrities, revealing a pronounced gender asymmetry in social harm. Together, these findings show how monetized, community-driven generative AI platforms can produce gendered harms, raising questions about consent, governance, and enforcement.