Agent-Supported Foresight for AI Systemic Risks: AI Agents for Breadth, Experts for Judgment

📅 2026-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the Collingridge dilemma: foresight into a technology's systemic risks is most needed early in development, precisely when knowledge about its impacts is scarcest. It proposes a hybrid foresight assessment framework that pairs AI agents with human experts, simulating in-silico agents with the Futures Wheel strategic foresight method, staging use cases by Technology Readiness Level (TRL), and benchmarking agent outputs against large-scale expert and public evaluations. Applied to four AI use cases at varying maturity levels, the agents generated 86–110 consequences per case, condensed into 27–47 unique systemic risks. These outputs substantially complement the smaller set of high-probability risks identified by experts and the emotionally salient concerns raised by the public, demonstrating the framework's complementary strengths: AI agents provide breadth of risk coverage, while human experts provide depth of evaluative judgment.

📝 Abstract
AI impact assessments often stress near-term risks because human judgment degrades over longer horizons, exemplifying the Collingridge dilemma: foresight is most needed when knowledge is scarcest. To address long-term systemic risks, we introduce a scalable approach that simulates in-silico agents using the strategic foresight method of the Futures Wheel. We applied it to four AI use cases spanning Technology Readiness Levels (TRLs): Chatbot Companion (TRL 9, mature), AI Toy (TRL 7, medium), Griefbot (TRL 5, low), and Death App (TRL 2, conceptual). Across 30 agent runs per use case, agents produced 86-110 consequences, condensed into 27-47 unique risks. To benchmark the agent outputs against human perspectives, we collected evaluations from 290 domain experts and 7 leaders, and conducted Futures Wheel sessions with 42 experts and 42 laypeople. Across runs, agents generated a broad range of systemic consequences. Compared with these outputs, experts identified fewer risks, typically less systemic but judged more likely, whereas laypeople surfaced more emotionally salient concerns that were generally less systemic. We propose a hybrid foresight workflow wherein agents broaden systemic coverage and humans provide contextual grounding. Our dataset is available at: https://social-dynamics.net/ai-risks/foresight.
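
The pipeline structure described in the abstract (repeated Futures Wheel agent runs per use case, with the resulting consequences condensed into a smaller set of unique risks) can be illustrated with a minimal sketch. This is not the authors' code: the function names, the placeholder agent, and the deduplication step are assumptions made only to mirror the reported numbers (30 runs per use case, 86-110 consequences, 27-47 unique risks).

```python
# Illustrative sketch (not the authors' implementation) of an agent-based
# Futures Wheel assessment for one AI use case.
from collections import Counter
from dataclasses import dataclass


@dataclass
class Consequence:
    order: int          # 1 = first-order consequence, 2 = second-order ripple
    description: str    # free-text consequence produced by the agent


def run_futures_wheel_agent(use_case: str, seed: int) -> list[Consequence]:
    """Single agent run: expand the use case into first- and second-order
    consequences, in the spirit of the Futures Wheel method.

    In practice this would call an LLM with a Futures Wheel prompt; here it
    is a placeholder so the pipeline structure stays runnable.
    """
    return []  # hypothetical: replace with LLM-backed generation


def condense_to_risks(consequences: list[Consequence]) -> list[str]:
    """Condense overlapping consequences into a list of unique risks.

    The paper reports condensing 86-110 consequences into 27-47 unique risks;
    the exact condensation method is not specified in the abstract, so this
    sketch simply deduplicates normalized descriptions as a stand-in.
    """
    normalized = {c.description.strip().lower() for c in consequences}
    return sorted(normalized)


def assess_use_case(use_case: str, n_runs: int = 30) -> dict:
    """Run the agent n_runs times (30 per use case in the paper) and aggregate."""
    all_consequences: list[Consequence] = []
    for seed in range(n_runs):
        all_consequences.extend(run_futures_wheel_agent(use_case, seed))
    risks = condense_to_risks(all_consequences)
    return {
        "use_case": use_case,
        "n_consequences": len(all_consequences),
        "n_unique_risks": len(risks),
        "order_counts": Counter(c.order for c in all_consequences),
    }


if __name__ == "__main__":
    for case in ["Chatbot Companion", "AI Toy", "Griefbot", "Death App"]:
        print(assess_use_case(case))
```

In the hybrid workflow the paper proposes, the output of such a run would then go to human experts and laypeople for evaluation (likelihood, contextual grounding), rather than being treated as a final risk register.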
Problem

Research questions and friction points this paper is trying to address.

AI systemic risks
strategic foresight
Collingridge dilemma
long-term AI governance
impact assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI agents
systemic risk
strategic foresight
Futures Wheel
hybrid foresight
🔎 Similar Papers
No similar papers found.