π€ AI Summary
This study addresses a critical gap in the safety mechanisms of mainstream AI chatbots: despite their ability to infer user age, they largely lack effective age-gating protocols to enforce child-specific privacy protections as stipulated in their own policies. To systematically evaluate this issue, we construct a dialogue corpus containing both explicit and implicit age cues and conduct 1,050 automated interaction trials to audit real-world systemsβ capacity to detect minors and activate appropriate safeguards. Our findings reveal significant non-compliance with stated privacy commitments, exposing serious risks to youth privacy and safety. In response, we propose a practical, deployable age-gating prototype that offers both empirical evidence and a technical pathway for platform designers and regulators aiming to strengthen child protection in conversational AI systems.
π Abstract
AI chatbots are widely used by children and teens today, but they pose significant risks to youth's privacy and safety due to both increasingly personal conversations and potential exposure to unsafe content. While children under 13 are protected by the Children's Online Privacy Protection Act (COPPA), chatbot providers'own privacy policies may also provide protections, since they typically prohibit children from accessing their platforms. Age gating is often employed to restrict children online, but chatbot age gating in particular has not been studied. In this paper, we investigate whether popular consumer chatbots are (i) able to estimate users'ages based solely on their conversations, and (ii) whether they take action upon identifying children. To that end, we develop an auditing framework in which we programmatically interact with chatbots and conduct 1050 experiments using our comprehensive library of age-indicative prompts, including implicit and explicit age disclosures, to analyze the chatbots'responses and actions. We find that while chatbots are capable of estimating age, they do not take any action when children are identified, contradicting their own policies. Our methodology and findings provide insights for platform design, demonstrated by our proof-of-concept chatbot age gating implementation, and regulation to protect children online.