Criminal Liability of Generative Artificial Intelligence Providers for User-Generated Child Sexual Abuse Material

📅 2026-01-07
🏛️ arXiv.org
🤖 AI Summary
This study addresses the unsettled question under current law of whether providers of generative artificial intelligence systems may incur criminal liability when their models are misused to produce child sexual abuse material (CSAM). Employing the German Criminal Code as an analytical framework, the paper systematically integrates legal interpretation, particularly textual analysis, with the technical characteristics of generative AI across multiple operational scenarios. It clarifies the boundaries of criminal responsibility for developers, researchers, and corporate entities, demonstrating that model providers may indeed be held criminally liable under specific technical implementations and policy conditions. Building on these findings, the work proposes compliance-oriented development guidelines and preventive measures to mitigate legal risk, offering interdisciplinary legal guidance for the governance of AI systems.

📝 Abstract
The development of increasingly powerful Generative Artificial Intelligence (GenAI) has expanded both its capabilities and the variety of its outputs. This has introduced significant legal challenges, including gray areas in various legal systems, such as the assessment of criminal liability for those responsible for these models. We therefore conducted a multidisciplinary study that applies statutory interpretation of the relevant German laws to a set of scenarios, providing a perspective on the different properties of GenAI in the context of Child Sexual Abuse Material (CSAM) generation. We found that generating CSAM with GenAI may have criminal and legal consequences not only for the user committing the primary offense but also for individuals responsible for the models, such as independent software developers, researchers, and company representatives. Moreover, the assessment of criminal liability may depend on contextual and technical factors, including the type of generated image, content moderation policies, and the model's intended purpose. Based on our findings, we discuss the implications for different roles, as well as the requirements for developing such systems.
Problem

Research questions and friction points this paper is trying to address.

Generative Artificial Intelligence
Child Sexual Abuse Material
Criminal Liability
Legal Challenges
AI Providers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative Artificial Intelligence
Criminal Liability
Child Sexual Abuse Material
Statutory Interpretation
Content Moderation
Anamaria Mojica-Hanke
University of Passau
Thomas Goger
Bavarian Central Office for the Prosecution of Cybercrime
Svenja Wölfel
University of Passau
Brian Valerius
University of Passau
Steffen Herbold
University of Passau