Future progress in artificial intelligence: A survey of expert opinion

📅 2025-08-09
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study investigates expert consensus and divergence regarding the timeline and existential risks associated with Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI). We conducted a structured, global survey across four distinct cohorts of AI researchers and practitioners, yielding a systematic, quantitative dataset of probabilistic forecasts on AGI/ASI emergence. Results indicate a median predicted AGI arrival window of 2040–2050 (50% probability), rising to 90% by 2075; ASI is projected to emerge within roughly 30 years after AGI, with experts assigning about a one-in-three probability that this development turns out 'bad' or 'extremely bad' for humanity. Our contributions are threefold: (1) a comprehensive empirical mapping of AGI/ASI timelines and risk perceptions; (2) robust evidence of widespread concern among AI experts about existential risk; and (3) an empirically grounded benchmark to inform AI governance frameworks and long-term AI safety research.

πŸ“ Abstract
There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be 'bad' or 'extremely bad' for humanity.
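The abstract reports two anchor points on an aggregate forecast curve: a one-in-two chance of high-level machine intelligence by around 2040–2050 and a nine-in-ten chance by 2075. A minimal sketch of how these points can be read as a cumulative probability curve, assuming simple linear interpolation between the reported medians (the survey itself reports only the aggregate estimates, not a functional form):

```python
def interpolated_probability(year, anchors=((2045, 0.50), (2075, 0.90))):
    """Linearly interpolate the cumulative probability of high-level machine
    intelligence by a given year, between the survey's two anchor estimates:
    ~50% by 2045 (midpoint of the 2040-2050 window) and ~90% by 2075.
    """
    (y1, p1), (y2, p2) = anchors
    if year <= y1:
        return p1  # clamp: no reported data before the first anchor
    if year >= y2:
        return p2  # clamp: no reported data beyond the last anchor
    return p1 + (p2 - p1) * (year - y1) / (y2 - y1)

print(interpolated_probability(2060))  # midway between anchors -> 0.7
```

Linear interpolation is only one of many curves consistent with two points; it is used here purely to make the reported numbers concrete, not as a claim about the survey respondents' underlying beliefs.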
Problem

Research questions and friction points this paper is trying to address.

- Surveying expert opinion on AI development timelines
- Assessing the risks of the emergence of high-level machine intelligence
- Estimating the probability that this development turns out badly for humanity

Innovation

Methods, ideas, or system contributions that make the work stand out.

- Surveyed expert opinion via a brief questionnaire distributed to four expert groups
- Collected median probabilistic estimates of AI development timelines
- Assessed perceived risks of superintelligent AI development
