AI Summary
This study investigates expert consensus and divergence regarding the timeline and existential risks associated with Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI). We conducted a structured, global survey across four distinct cohorts of AI researchers and practitioners, yielding a systematic, quantitative dataset of probabilistic forecasts on AGI/ASI emergence. Results indicate a median predicted AGI arrival window of 2040-2050 (50% probability), rising to a 90% probability by 2075; ASI is projected to emerge within roughly 30 years of AGI, and experts assign about a one-in-three probability that this development turns out badly or extremely badly for humanity. Our contributions are threefold: (1) a comprehensive empirical mapping of AGI/ASI timelines and risk perceptions; (2) evidence of substantial concern among AI experts about existential risk; and (3) an empirically grounded benchmark to inform AI governance frameworks and long-term AI safety research.
Abstract
There is, in some quarters, concern that high-level machine intelligence and superintelligent AI may arrive within a few decades, bringing with them significant risks for humanity. In other quarters, these issues are ignored or dismissed as science fiction. We wanted to clarify the actual distribution of opinions: what probability the best experts currently assign to high-level machine intelligence arriving within a particular time frame, what risks they see in that development, and how fast they expect it to unfold. We therefore designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was a one-in-two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine-in-ten chance by 2075. Experts expect such systems to move on to superintelligence within less than 30 years thereafter, and they estimate the chance is about one in three that this development turns out to be 'bad' or 'extremely bad' for humanity.
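To make the headline numbers concrete: the questionnaire elicited, from each respondent, the years by which they assign a 10%, 50%, and 90% probability to high-level machine intelligence, and the reported figures are medians across respondents. The Python sketch below illustrates that aggregation; the `responses` triples are invented placeholders for illustration only, not the survey's actual data.

```python
# Minimal, illustrative sketch (assumed setup, not the paper's data or code):
# each respondent gives the year by which they assign a 10%, 50%, and 90%
# probability to high-level machine intelligence (HLMI); the headline
# figures are medians of these years across respondents.
from statistics import median

# Hypothetical placeholder answers: (year at 10%, year at 50%, year at 90%).
responses = [
    (2025, 2040, 2070),
    (2030, 2050, 2080),
    (2028, 2045, 2075),
    (2035, 2055, 2090),
]

probability_levels = (0.10, 0.50, 0.90)

# Aggregate each probability level by taking the median year across experts,
# mirroring the abstract's "median estimate of respondents".
for i, p in enumerate(probability_levels):
    year = median(r[i] for r in responses)
    print(f"Median year for a {p:.0%} chance of HLMI: {year}")
```

With real survey data in place of the placeholders, the three printed medians correspond to the kind of aggregate timeline the abstract reports (a one-in-two chance around 2040-2050, nine in ten by 2075).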