Examining Risks in the AI Companion Application Ecosystem

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the systemic safety risks posed by AI companion applications, which can harm users or be maliciously exploited, yet lack a dedicated threat analysis framework. To bridge this gap, the work proposes the first two-dimensional threat model tailored to the AI companion ecosystem, encompassing both user victimization and technology misuse. Through a mixed-methods approach—combining app store data mining, stratified sampling, and manual walkthroughs—the authors conduct a systematic analysis of 30 applications selected from a corpus of 489. The investigation uncovers novel risks, including excessive collection of sensitive data, anthropomorphic interaction patterns that induce emotional dependency, deliberate design of addictive mechanisms, and the non-consensual generation of intimate synthetic media. These findings provide an empirical foundation and theoretical support for future policy development, secure design practices, and research in this emerging domain.

📝 Abstract
While computer systems that allow users to interact through conversational natural language (i.e., chatbots) have existed for many years, varying types of applications advertising AI companionship (e.g., Character AI, Replika) have proliferated in recent years due to advancements in large language models. Our work offers a threat model encompassing two distinct risk categories: harms posed to users by AI companion applications, and harms enabled by malicious users exploiting application features. To further understand this application ecosystem, we identified 489 unique apps from the App Store and Play Store that advertised AI companionship. We then systematically conducted and analyzed walkthroughs of a stratified sample of 30 apps with respect to our threat model. Through our analysis, we categorize broader ecosystem trends that provide context for understanding threats and identify specific threats related to sensitive data collection and sharing, anthropomorphism, engagement mechanisms, sexual interactions and media, as well as the ingestion and reconstruction of likeness, including the potential for generating synthetic nonconsensual intimate imagery. This study provides a foundational security perspective on the AI companion application ecosystem and informs future research within and beyond this field, policy, and technical development. Content warning: This paper includes descriptions of applications that can be used to create synthetic nonconsensual representations, including explicit imagery, as well as discussion of self-harm and suicidal ideation.
Problem

Research questions and friction points this paper addresses.

AI companion applications
threat model
sensitive data
anthropomorphism
nonconsensual intimate imagery
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI companion applications
threat modeling
large language models
synthetic nonconsensual imagery
privacy risks