🤖 AI Summary
This study addresses the growing incidence of AI-generated non-consensual intimate imagery (AIG-NCII) in K–12 schools, where educators often lack adequate policies, training, and support to respond effectively. Through qualitative interviews with 20 U.S. education professionals, it reveals their dual vulnerability as both potential victims and frontline responders to AIG-NCII. Thematic analysis highlights critical gaps in legal awareness, AI literacy, and institutional resources within school settings. Situated within the broader discourse on AI ethics in education, the research proposes a multi-stakeholder intervention framework to inform the development of interactive educational tools, curriculum design, and school policies. It underscores the urgent need for systemic support mechanisms to address this emerging digital harm.
📝 Abstract
AI-generated non-consensual intimate imagery (AIG-NCII) is an emerging social problem driven by advances in AI tools. While recent incidents in middle and high schools have underscored the urgency of this issue, little is known about what concrete supports schools need to address AIG-NCII effectively. To fill this gap, we conducted an interview study with 20 educators in the U.S., investigating their attitudes, experiences, and practices related to AIG-NCII. Educators expressed concern about both students' vulnerability and their own: AIG-NCII may cause moral decline among students, while educators themselves could become victims. Nevertheless, existing school practices are limited, and schools lack both training and systematic policies. Challenges such as scarce resources, unclear legal boundaries, and limited knowledge of AI make implementation difficult. The findings of this paper contribute to interactive educational tool design, curriculum design, and policy-making, especially regarding the need for multi-stakeholder strategies to address issues surrounding AIG-NCII.