🤖 AI Summary
This paper addresses the challenge of identifying and assessing the legal, ethical, and societal risks arising from AI systems designed to mimic humans (AI automatons) across their design and deployment. To this end, it proposes a structured "design axis" framework. Drawing on a synthesis of related literature and extensive examples of existing AI systems intended to mimic humans, the framework characterizes the design space of AI automatons along key dimensions such as what is mimicked (the referent), the interaction modality, and the level of transparency. By grounding otherwise fragmented engineering practices at the level of analyzable, intervenable design decisions, it bridges the gap between technical implementation and societal-impact assessment. The resulting framework offers an interpretable, actionable conceptual taxonomy that can support policy formulation, engineering standards, and ethical review, thereby enhancing the systematicity and traceability of responsible innovation in AI automaton development.
📝 Abstract
There is a growing proliferation of AI systems designed to mimic people's behavior, work, abilities, likenesses, or humanness -- systems we dub AI automatons. Individuals, groups, or generic humans are being simulated to produce creative work in their styles, to respond to surveys in their places, to probe how they would use a new system before deployment, to provide users with assistance and companionship, and to anticipate their possible future behavior and interactions with others, just to name a few applications. The research, design, deployment, and availability of such AI systems have, however, also prompted growing concerns about a wide range of possible legal, ethical, and other social impacts. To both 1) facilitate productive discussions about whether, when, and how to design and deploy such systems, and 2) chart the current landscape of existing and prospective AI automatons, we need to tease apart determinant design axes and considerations that can aid our understanding of whether and how various design choices along these axes could mitigate -- or instead exacerbate -- potential adverse impacts that the development and use of AI automatons could give rise to. In this paper, through a synthesis of related literature and extensive examples of existing AI systems intended to mimic humans, we develop a conceptual framework to help foreground key axes of design variations and provide analytical scaffolding to foster greater recognition of the design choices available to developers, as well as the possible ethical implications these choices might have.