🤖 AI Summary
This study addresses rights infringement arising from generative AI's unauthorized use of creative works—spanning visual art, writing, and programming—centering on three core deficits: lack of informed consent, absence of attribution, and the nonexistence of equitable compensation mechanisms. Through 20 cross-domain in-depth interviews and thematic coding analysis, it identifies creative professionals' governance expectations and reveals gaps between current AI governance practices and creators' normative demands. The study proposes a governance framework anchored in "consent–attribution–compensation," emphasizing creator agency and context-sensitive ethical alignment. Its contributions include empirically grounded policy recommendations and industry implementation guidelines. By foregrounding creative sovereignty, the framework advances both normative foundations and practical pathways for ethically grounded, rights-respecting AI training protocols.
📝 Abstract
Since the emergence of generative AI, creative workers have spoken up about the career-based harms they have experienced as a result of this new technology. A common theme in these accounts of harm is that generative AI models are trained on workers' creative output without their consent and without giving credit or compensation to the original creators. This paper reports findings from 20 interviews with creative workers in three domains: visual art and design, writing, and programming. We investigate the gaps between current AI governance strategies and what creative workers want out of generative AI governance, as well as the nuanced roles of creative workers' consent, compensation, and credit in training AI models on their work. Finally, we make recommendations for how generative AI can be governed and how operators of generative AI systems might more ethically train models on creative output in the future.