🤖 AI Summary
This study addresses the authenticity crisis of generative AI imagery and the EU AI Act’s regulatory requirement for robust, invisible, and verifiable watermarking. Method: We conduct the first empirical, interdisciplinary audit integrating technical evaluation and legal interpretation—systematically testing watermark implementations across 50 mainstream AI image generation systems and aligning findings with the AI Act’s provisions to classify applicable systems and their distinct compliance obligations. Contribution/Results: We identify four categories of regulated systems and find that only a negligible minority meet the Act’s stringent watermarking criteria; significant gaps persist between industry practice and legal requirements. To bridge this divide, we propose a novel “technology–law” dual-dimensional assessment framework, offering a reproducible methodology and actionable pathway for AI content provenance governance and regulatory enforcement.
📝 Abstract
AI-generated images have become so realistic in recent years that individuals can no longer distinguish them from "real" images. This development creates a series of societal risks and challenges our perception of what is true and what is not, particularly with the emergence of "deep fakes" that impersonate real individuals. Watermarking, a technique that embeds identifying information within images to indicate their AI-generated nature, has emerged as a primary mechanism for addressing the risks posed by AI-generated images. The implementation of watermarking techniques is now becoming a legal requirement in many jurisdictions, including under the 2024 EU AI Act. Despite the widespread use of AI image generation systems, the current status of watermarking implementation remains largely unexamined. Moreover, the practical implications of the AI Act's watermarking requirements have not previously been studied. The present paper therefore provides an empirical analysis of 50 of the most widely used AI systems for image generation and embeds this empirical analysis in a legal analysis of the AI Act. We identify four categories of generative AI image systems relevant under the AI Act, outline the legal obligations for each category, and find that only a minority of providers currently implement adequate watermarking practices.
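To make the core idea of invisible watermarking concrete, here is a minimal, hypothetical sketch of a least-significant-bit (LSB) scheme on raw 8-bit pixel values. This is only an illustration of embedding identifying bits imperceptibly; the production watermarks audited in the paper (and those the AI Act envisions) use far more robust techniques that survive compression and editing.

```python
# Hypothetical LSB watermarking sketch -- illustrative only, not any
# provider's actual scheme. Pixels are modeled as a flat bytes object.

def embed_watermark(pixels: bytes, mark: bytes) -> bytes:
    """Hide each bit of `mark` in the lowest bit of successive pixels."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for watermark")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the lowest bit only
    return bytes(out)

def extract_watermark(pixels: bytes, n_bytes: int) -> bytes:
    """Read back `n_bytes` of hidden data from the pixel LSBs."""
    out = bytearray()
    for j in range(n_bytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[j * 8 + i] & 1)
        out.append(byte)
    return bytes(out)

pixels = bytes(range(64))              # stand-in for raw image data
marked = embed_watermark(pixels, b"AI")
assert extract_watermark(marked, 2) == b"AI"
# each pixel value changes by at most 1, i.e. imperceptibly
assert all(abs(a - b) <= 1 for a, b in zip(pixels, marked))
```

Note that a naive LSB mark like this is trivially destroyed by re-encoding the image, which is precisely why the AI Act demands watermarks that are "robust" as well as invisible and verifiable.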