Perceptions of AI Bad Behavior: Variations on Discordant Non-Performance

📅 2025-11-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the gap in understanding how non-experts morally evaluate harmful behaviors of generative AI, particularly large language models, by investigating lay moral cognition. Employing semi-structured interviews and inductive thematic analysis, it integrates Moral Foundations Theory, Construal Level Theory (psychological distance), and the Moral Dyad framework into a cross-theoretical integrative model. The research identifies two empirically grounded categories of AI misbehavior: "non-performance" (functional failure) and "social discordance" (violation of interpersonal or societal expectations). Findings reveal that public moral judgments are highly context-sensitive, contingent on the perceived severity of a moral violation and its alignment with prevailing social norms. The study yields an extensible, theory-informed taxonomy of AI harms, providing empirical and conceptual foundations for AI ethics governance, human-AI interaction design, and public AI literacy initiatives.

📝 Abstract
Popular discourses are thick with narratives of generative AI's problematic functions and outcomes, yet there is little understanding of how non-experts consider AI activities to constitute bad behavior. This study starts to bridge that gap through inductive analysis of interviews with non-experts (N = 28) focusing on large language models in general and their bad behavior, specifically. Results suggest bad behaviors are not especially salient when people discuss AI generally, but the notion of AI behaving badly is easily engaged when prompted, and bad behavior becomes even more salient when evaluating specific AI behaviors. Types of observed behaviors considered bad mostly align with their inspiring moral foundations; across all observed behaviors, some variations on non-performance and social discordance were present. By scaffolding findings at the intersections of moral foundations theory, construal level theory, and moral dyadism, a tentative framework for considering AI bad behavior is proposed.
Problem

Research questions and friction points this paper is trying to address.

Examining non-experts' perceptions of AI bad behavior
Analyzing variations in AI non-performance and social discordance
Proposing framework integrating moral foundations and construal theories
Innovation

Methods, ideas, or system contributions that make the work stand out.

Inductive analysis of non-expert interviews
Scaffolding moral foundations and construal theories
Proposing framework for AI bad behavior evaluation