🤖 AI Summary
This study addresses pervasive issues in the terms of service of generative AI platforms—namely, informational opacity, imbalanced allocation of rights and responsibilities, and the imposition of obligations users cannot fulfil—that significantly undermine consumer rights. Adopting a European Union consumer protection perspective, it pioneers the application of consumer law frameworks to the analysis of generative AI terms. Through qualitative content analysis, the authors develop a specialized coding manual to systematically compare the service agreements of six leading platforms. Findings reveal that all examined terms disclaim any guarantee of service quality, unilaterally grant platforms broad rights to use user data, and shift compliance liability for AI-generated outputs entirely onto users. The study shows how, given the opacity of the underlying models, consumers are exposed to unreasonable risks, and it offers empirical evidence and policy recommendations to strengthen AI governance and enhance consumer protection.
📝 Abstract
Generative AI services like ChatGPT and Gemini are some of the fastest-growing consumer services. Individuals using such services must accept their terms of use before access, and conform to these terms for continued use of the service. Established literature has shown that despite their status as legally binding agreements, terms of use are not actually well understood, and may carry implications that surprise consumers. In this paper, we analyse the terms of six generative AI services from the perspective of an EU-based consumer. Our findings, based on a codebook that we developed and provide in the paper, reiterate known issues with generative AI services, such as the default use of user data for training, and surface new concerns regarding responsibility, liability, and rights. All terms in our analysis contained language that explicitly disclaims assurances regarding the quality, availability, and appropriateness of the service, regardless of whether the service is free or paid. The terms also make users solely responsible for ensuring outputs meet norms dictated by the provider, despite users being given no information about or control over the functioning of the model, and at the risk of account termination. The terms further restrict how users may use outputs, while service providers utilise both user-provided inputs and user-liable outputs for a wide variety of purposes at their discretion. The implications of these practices are severe: we find that consumers lack necessary information, face a significant imbalance of power, and bear responsibilities they cannot materially fulfil without violating the terms. To remedy this situation, we make concrete recommendations for authorities and policymakers to urgently upgrade existing consumer protection mechanisms to tackle this growing issue.