🤖 AI Summary
This study investigates divergences and convergences between AI researchers and the general public regarding “subjectively experienced AI”—artificial systems possessing phenomenal consciousness or subjective experience—addressing critical gaps in AI ethics and governance.
Method: The study surveys 582 AI researchers and 838 nationally representative US participants, combining probabilistic forecasts (chances of emergence by specific dates) with attitude measures on moral and governance questions to quantitatively compare perceptions across the two groups.
Contribution/Results: Median respondents in both groups estimate a 60–70% probability that subjectively experienced AI will exist by 2100, and researchers assign a lower probability than the public that such systems will never exist (10% vs. 25%). Support for welfare protections exceeds opposition but falls well below support for protections for animals or the environment. A majority of respondents endorse developer-level safeguards against potential risks now, and there is strong agreement that such systems, if created, should behave ethically and be held accountable. The findings identify robust cross-population agreement on governance fundamentals while highlighting persistent uncertainty and disagreement over timelines and the appropriate policy response to subjective experience in AI.
📝 Abstract
We surveyed 582 AI researchers who have published in leading AI venues and 838 nationally representative US participants about their views on the potential development of AI systems with subjective experience and how such systems should be treated and governed. When asked to estimate the chances that such systems will exist by specific dates, the median responses were 1% (AI researchers) and 5% (public) by 2024, 25% and 30% by 2034, and 70% and 60% by 2100, respectively. The median member of the public thought there was a higher chance that AI systems with subjective experience would never exist (25%) than the median AI researcher did (10%). Both groups perceived a need for multidisciplinary expertise to assess AI subjective experience. Although support for welfare protections for such AI systems exceeded opposition, it remained far lower than support for protections for animals or the environment. Attitudes toward moral and governance issues were divided in both groups, especially regarding whether such systems should be created and what rights or protections they should receive. Yet a majority of respondents in both groups agreed that safeguards against the potential risks from AI systems with subjective experience should be implemented by AI developers now, and that, if created, AI systems with subjective experience should treat others well, behave ethically, and be held accountable. Overall, these results suggest that both AI researchers and the public regard the emergence of AI systems with subjective experience as a possibility this century, though substantial uncertainty and disagreement remain about the timeline and appropriate response.