AI, Digital Platforms, and the New Systemic Risk

📅 2025-09-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Deep integration of AI with digital platforms engenders novel systemic risks—including multi-agent cascading failures, large-scale discrimination, and systematic hallucinations—yet existing regulatory frameworks (e.g., the EU AI Act and Digital Services Act) suffer from narrow risk definitions, incomplete coverage, and insufficient cross-regulatory coordination. Method: The authors propose a four-level taxonomy of AI systemic risk, synthesizing analytical paradigms from finance, complex systems science, climate resilience, and cybersecurity, complemented by theoretical modeling and a comparative test of the frameworks against five key examples. Contribution/Results: The analysis exposes structural gaps in current regulation's capacity to identify collective harms and interaction-driven risks, and finds that the Digital Services Act holds greater latent potential for systemic risk detection than the AI Act. Accordingly, the paper recommends three policy pathways: expanding risk assessment dimensions, explicitly incorporating collective harm into regulatory scope, and strengthening cross-sectoral and cross-jurisdictional regulatory coordination, thereby advancing a resilient governance framework tailored to hybrid intelligent systems.

📝 Abstract
As artificial intelligence (AI) becomes increasingly embedded in digital, social, and institutional infrastructures, and AI and platforms are merged into hybrid structures, systemic risk has emerged as a critical but undertheorized challenge. In this paper, we develop a rigorous framework for understanding systemic risk in AI, platform, and hybrid system governance, drawing on insights from finance, complex systems theory, climate change, and cybersecurity, domains where systemic risk has already shaped regulatory responses. We argue that recent legislation, including the EU's AI Act and Digital Services Act (DSA), invokes systemic risk but relies on narrow or ambiguous characterizations of this notion, sometimes reducing this risk to specific capabilities present in frontier AI models, or to harms occurring in economic market settings. The DSA, we show, actually does a better job at identifying systemic risk than the more recent AI Act. Our framework highlights novel risk pathways, including the possibility of systemic failures arising from the interaction of multiple AI agents. We identify four levels of AI-related systemic risk and emphasize that discrimination at scale and systematic hallucinations, despite their capacity to destabilize institutions and fundamental rights, may not fall under current legal definitions, given the AI Act's focus on frontier model capabilities. We then test the DSA, the AI Act, and our own framework on five key examples, and propose reforms that broaden systemic risk assessments, strengthen coordination between regulatory regimes, and explicitly incorporate collective harms.

Problem

Research questions and friction points this paper is trying to address.

Understanding systemic risk in AI and digital platform governance
Analyzing limitations of current legislation (AI Act, DSA) in addressing systemic risk
Developing a framework for novel risk pathways arising from multi-agent AI interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed a four-level framework for AI systemic risk governance
Identified novel risk pathways from interactions among multiple AI agents
Proposed reforms broadening systemic risk assessments, incorporating collective harms, and strengthening cross-regime regulatory coordination