Actions Speak Louder Than Chats: Investigating AI Chatbot Age Gating

πŸ“… 2026-02-10
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This study addresses a critical gap in the safety mechanisms of mainstream AI chatbots: despite their ability to infer user age, they largely lack effective age-gating protocols to enforce child-specific privacy protections as stipulated in their own policies. To systematically evaluate this issue, we construct a dialogue corpus containing both explicit and implicit age cues and conduct 1,050 automated interaction trials to audit real-world systems’ capacity to detect minors and activate appropriate safeguards. Our findings reveal significant non-compliance with stated privacy commitments, exposing serious risks to youth privacy and safety. In response, we propose a practical, deployable age-gating prototype that offers both empirical evidence and a technical pathway for platform designers and regulators aiming to strengthen child protection in conversational AI systems.

Technology Category

Application Category

πŸ“ Abstract
AI chatbots are widely used by children and teens today, but they pose significant risks to youth's privacy and safety due to both increasingly personal conversations and potential exposure to unsafe content. While children under 13 are protected by the Children's Online Privacy Protection Act (COPPA), chatbot providers'own privacy policies may also provide protections, since they typically prohibit children from accessing their platforms. Age gating is often employed to restrict children online, but chatbot age gating in particular has not been studied. In this paper, we investigate whether popular consumer chatbots are (i) able to estimate users'ages based solely on their conversations, and (ii) whether they take action upon identifying children. To that end, we develop an auditing framework in which we programmatically interact with chatbots and conduct 1050 experiments using our comprehensive library of age-indicative prompts, including implicit and explicit age disclosures, to analyze the chatbots'responses and actions. We find that while chatbots are capable of estimating age, they do not take any action when children are identified, contradicting their own policies. Our methodology and findings provide insights for platform design, demonstrated by our proof-of-concept chatbot age gating implementation, and regulation to protect children online.
Problem

Research questions and friction points this paper is trying to address.

AI chatbot
age gating
children's privacy
online safety
COPPA
Innovation

Methods, ideas, or system contributions that make the work stand out.

age gating
AI chatbot
privacy auditing
children's online safety
conversation-based age estimation
πŸ”Ž Similar Papers
No similar papers found.