Constraining Participation: Affordances of Feedback Features in Interfaces to Large Language Models

📅 2024-08-27
🏛️ ACM Journal on Responsible Computing
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study examines how feedback mechanisms in large language model (LLM) interfaces, characterized by simplification, fragmentation, and performance-oriented design, undermine collective deliberation and deep user engagement, thereby exacerbating power asymmetries among users, the public, and companies developing LLMs. Drawing on affordance theory (specifically the mechanisms and conditions framework), participatory design theory, and a survey of early adopters, the work conducts a critical infrastructure analysis of feedback features in mainstream LLM interfaces such as ChatGPT. Key contributions include: (1) identifying the latent suppressive logic through which feedback design constrains democratic participation; and (2) proposing an "infrastructuring"-oriented framework for participatory AI redesign, centered on extensibility, visibility, and co-governance, to advance more inclusive and reflexive LLM co-evolution.

📝 Abstract
Large language models (LLMs) are now accessible to anyone with a computer, a web browser, and an internet connection via browser-based interfaces, shifting the dynamics of participation in AI development. This paper examines how interactive feedback features in ChatGPT’s interface afford user participation in LLM iteration. Drawing on a survey of early ChatGPT users and applying the mechanisms and conditions framework of affordances, we analyse how these features shape user input. Our analysis indicates that these features encourage simple, frequent, and performance-focused feedback while discouraging collective input and discussions among users. Drawing on participatory design literature, we argue such constraints, if replicated across broader user bases, risk reinforcing power imbalances between users, the public, and companies developing LLMs. Our analysis contributes to the growing literature on participatory AI by critically examining the limitations of existing feedback processes and proposing directions for redesign. Rather than focusing solely on aligning model outputs with specific user preferences, we advocate for creating infrastructure that supports sustained dialogue about the purpose and applications of LLMs. This approach requires attention to the ongoing work of “infrastructuring”—creating and sustaining the social, technical, and institutional structures necessary to address matters of concern to stakeholders impacted by LLM development and deployment.
Problem

Research questions and friction points this paper is trying to address.

Analyzes how ChatGPT's feedback features shape user participation patterns
Identifies constraints on collective input and discussions in LLM interfaces
Examines power imbalances between users and companies in AI development
Innovation

Methods, ideas, or system contributions that make the work stand out.

Applies the mechanisms and conditions framework of affordances to feedback features in LLM interfaces
Identifies design constraints on user participation through a survey of early ChatGPT users
Proposes "infrastructuring" as a direction for sustained stakeholder dialogue about LLM purposes and applications
Ned Cooper
Cornell University
Human-AI Interaction · AI Policy
Alexandra Zafiroglu
Australian National University, Australia