🤖 AI Summary
Anthropomorphic features of generative AI risk fostering user overtrust and responsibility diffusion, engendering cognitive, occupational, ethical, institutional, and societal harms, yet these harms remain poorly characterized. To address this gap, we conducted focus group interviews with 30 technology professionals across six occupational roles, applying thematic coding and cross-functional comparative analysis. This study presents the first systematic taxonomy of anthropomorphism-related harms in generative AI. We introduce a conceptual map of “human similarity” that identifies three interrelated dimensions (capability, intention, and identity) underlying user misattribution, and we trace their mechanistic links to empirically observed risks. Building on these insights, we propose a cross-functional governance framework integrating technical design principles, organizational support structures, and role-specific implementation practices. Our findings provide an empirically grounded, theoretically informed foundation for developing AI transparency standards, human-AI collaboration protocols, and occupation-tailored training programs.
📝 Abstract
Generative AI's humanlike qualities are driving its rapid adoption in professional domains. However, this anthropomorphic appeal raises concerns among HCI and responsible AI scholars about potential hazards and harms, such as overtrust in system outputs. To investigate how technology workers navigate these humanlike qualities and anticipate emergent harms, we conducted focus groups with 30 professionals across six job functions (ML engineering, product policy, UX research and design, product management, technology writing, and communications). Our findings reveal an unsettled knowledge environment surrounding humanlike generative AI, in which workers' varying perspectives illuminate a range of potential risks for individuals, knowledge-work fields, and society. We argue that workers require comprehensive support, including clearer conceptions of “humanlikeness,” to effectively mitigate these risks. To support mitigation strategies, we provide a conceptual map articulating the identified hazards and their connection to conflated notions of “humanlikeness.”