🤖 AI Summary
This study addresses the widespread yet underexamined phenomenon of account sharing on large language model (LLM) platforms, which are predominantly designed for single-user interaction. Through 245 survey responses and 36 in-depth interviews, the research systematically identifies four distinct types of LLM account sharing and introduces the "observer effect" to explain how users adjust their behaviors when perceiving surveillance. Grounded in sociological theory, the work conceptualizes account sharing as a form of socio-technical appropriation, revealing the emergent norms and heightened privacy vulnerabilities that arise in shared usage contexts. These findings provide empirical foundations and design recommendations for adapting LLM platforms to multi-user scenarios, thereby advancing AI services beyond the single-user paradigm toward more socially embedded interaction models.
📝 Abstract
Account sharing is common in subscription services and is now extending to generative AI platforms, which remain designed primarily for individual use. Sharing often requires workarounds that create new tensions. This study examines how LLM subscriptions are shared and the norms that develop around shared use. We combined a survey of 245 users with interviews of 36 participants to capture both usage patterns and lived experiences. Our analysis identified four types of account sharing, organized along two dimensions: whether the account owner also uses the account and whether subscription costs are shared. Within these types, we examined how norms were formed and how their fragility, especially around privacy, became evident in practice. Users, fully aware of this fragility, subtly adjusted their behavior, which we interpret through the lens of the observer effect. We frame LLM account sharing as a social practice of appropriation and outline design implications for adapting single-user platforms to multi-user realities.