🤖 AI Summary
This study addresses the unsettled question of whether providers of generative artificial intelligence systems may incur criminal liability under current law when their models are misused to produce child sexual abuse material (CSAM). Employing the German Criminal Code as its analytical framework, the paper systematically integrates legal interpretation, particularly textual analysis, with the technical characteristics of generative AI across multiple operational scenarios. It clarifies the boundaries of criminal responsibility for developers, researchers, and corporate entities, demonstrating that model providers may indeed be held criminally liable under specific technical implementations and policy conditions. Building on these findings, the work proposes compliance-oriented development guidelines and preventive measures to mitigate legal risk, offering interdisciplinary guidance for the governance of AI systems.
📝 Abstract
The development of more powerful Generative Artificial Intelligence (GenAI) has expanded its capabilities and the variety of outputs it can produce. This has introduced significant legal challenges, including gray areas in various legal systems, such as the assessment of criminal liability for those responsible for these models. We therefore conducted a multidisciplinary study that applies statutory interpretation of relevant German laws to a set of scenarios, providing a perspective on the different properties of GenAI in the context of Child Sexual Abuse Material (CSAM) generation. We found that generating CSAM with GenAI may have criminal consequences not only for the user committing the primary offense but also for individuals responsible for the models, such as independent software developers, researchers, and company representatives. Moreover, the assessment of criminal liability may be affected by contextual and technical factors, including the type of generated image, content moderation policies, and the model's intended purpose. Based on our findings, we discuss the implications for these different roles, as well as the requirements for developing such systems.