Exploring Families' Use and Mediation of Generative AI: A Multi-User Perspective

📅 2025-04-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how families use generative AI (GenAI) applications such as ChatGPT in domestic settings and the risks these tools pose to children under 16, including insufficient age-appropriate safeguards and the absence of parental control features on current platforms. Through semi-structured interviews with 12 families, the authors identify how families use and co-use GenAI, how parents mediate their children's use, and the factors that shape parents' mediation strategies. The findings are contextualized with a modified model of family mediation strategies, drawing on prior family media and mediation frameworks. Key design recommendations include age-adaptive content filtering and configurable, collaborative parent–child control interfaces, supporting the development of family-centred, ethically grounded GenAI systems.

📝 Abstract
Applications of Generative AI (GenAI), such as ChatGPT, have gained popularity among the public due to their ease of access, use, and support of educational and creative activities. Despite these benefits, GenAI poses unique risks for families, such as lacking sufficient safeguards tailored to protect children under 16 years of age and not offering parental control features. This study explores families' use and co-use of GenAI, the perceived risks and opportunities of ChatGPT, and how parents mediate their children's use of GenAI. Through semi-structured interviews with 12 families, we identified ways families used and mediated GenAI and factors that influenced parents' GenAI mediation strategies. We contextualize our findings with a modified model of family mediation strategies, drawing from previous family media and mediation frameworks. We provide insights for future research on family-GenAI interactions and highlight the need for more robust protective measures on GenAI platforms for families.
Problem

Research questions and friction points this paper is trying to address.

Examining family use and co-use of Generative AI tools like ChatGPT
Assessing risks and opportunities of ChatGPT for children under 16
Investigating parental mediation strategies for children's GenAI usage
Innovation

Methods, ideas, or system contributions that make the work stand out.

Studied family GenAI use and co-use through semi-structured interviews with 12 families
Proposed a modified model of family mediation strategies, drawing on prior family media and mediation frameworks
Highlighted the need for stronger child-focused safeguards and parental controls on GenAI platforms