🤖 AI Summary
This paper addresses the fundamental disagreement between the "doomer" and "boomer" camps in AI risk discourse, treating it not as a conflict of values but as a divergence in epistemic assumptions about the limits of human rationality and the dynamics of AI systems (e.g., emergence, the applicability of historical theories). Method: The authors propose a causally grounded framework for analyzing such disagreements, decomposing contested claims into four premise types (definitional, factual, causal, and moral) and introducing a large language model ensemble method to automatically extract and structure reasoning chains from large-scale debate corpora. Contribution/Results: Experiments isolate the core points of divergence in debates over existential ("X-") risks versus employment ("E-") risks, demonstrating the framework's scalability and validity on public risk topics. The approach establishes a reproducible, evidence-based methodology for the empirical analysis of techno-ethical controversies.
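To make the decomposition step concrete, below is a minimal sketch of how a single ensemble member might be asked to split a contested claim into the four premise types. It is illustrative only: `query_model` is a placeholder for whatever LLM API the pipeline uses (a canned response lets the sketch run as-is), and the prompt and data structures are assumptions, not the paper's implementation.

```python
import json
from dataclasses import dataclass
from enum import Enum

class PremiseType(Enum):
    DEFINITIONAL = "definitional"
    FACTUAL = "factual"
    CAUSAL = "causal"
    MORAL = "moral"

@dataclass
class Premise:
    kind: PremiseType
    text: str

PROMPT = (
    "Decompose the following claim from an AI-risk debate into its premises. "
    "Label each premise as definitional, factual, causal, or moral. "
    'Reply with a JSON list of {{"type": "...", "text": "..."}} objects.\n\n'
    "Claim: {claim}"
)

def query_model(model: str, prompt: str) -> str:
    """Stand-in for a real chat-completion call; returns a canned response
    so the sketch runs end to end. Replace with an actual API client."""
    return json.dumps([
        {"type": "causal", "text": "Capabilities emerge unpredictably with scale."},
        {"type": "factual", "text": "Current systems already show surprising behavior."},
    ])

def decompose(claim: str, model: str = "some-llm") -> list[Premise]:
    """Ask one ensemble member to split a claim into typed premises."""
    raw = query_model(model, PROMPT.format(claim=claim))
    return [Premise(PremiseType(p["type"]), p["text"]) for p in json.loads(raw)]

if __name__ == "__main__":
    for p in decompose("Scaling AI will inevitably produce uncontrollable systems."):
        print(f"[{p.kind.value}] {p.text}")
```

In the full pipeline, several models would run this step independently and their outputs would be reconciled (see the aggregation sketch after the abstract).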
📝 Abstract
The emergence of transformative technologies often surfaces deep societal divisions, nowhere more evident than in contemporary debates about artificial intelligence (AI). A striking feature of these divisions is that they persist despite shared interests in ensuring that AI benefits humanity and avoiding catastrophic outcomes. This paper analyzes contemporary debates about AI risk, parsing the differences between the "doomer" and "boomer" perspectives into definitional, factual, causal, and moral premises to identify key points of contention. We find that differences in perspectives about existential risk ("X-risk") arise fundamentally from differing causal premises about design vs. emergence in complex systems, while differences in perspectives about employment risks ("E-risks") stem from differing causal premises about the applicability of past theories (evolution) vs. their inapplicability (revolution). Disagreements about these two forms of AI risk appear to share two properties: neither involves significant disagreement on moral values, and both can be described in terms of differing views on the extent to which human rationality is bounded. Our approach to analyzing reasoning chains at scale, using an ensemble of LLMs to parse textual data, can be applied to identify key points of contention in debates about public risk in any arena.
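The "ensemble of LLMs" implies an aggregation step. The sketch below shows one plausible form: each model independently labels the same extracted premise, and a label is kept only when a strict majority agrees. The voting rule, threshold, and example premises are assumptions for illustration; the paper's actual reconciliation procedure may differ.

```python
from collections import Counter

def majority_label(labels: list[str], threshold: float = 0.5) -> str | None:
    """Return the label a strict majority of ensemble members agree on, else None."""
    if not labels:
        return None
    label, count = Counter(labels).most_common(1)[0]
    return label if count / len(labels) > threshold else None

# Hypothetical labels from three models for the same extracted premises.
votes = {
    "AI capabilities emerge unpredictably with scale": ["causal", "causal", "factual"],
    "Past automation waves created more jobs than they destroyed": ["factual", "factual", "factual"],
}

for premise, labels in votes.items():
    print(f"{majority_label(labels) or 'no consensus'}: {premise}")
```

Filtering on inter-model agreement like this is one standard way to trade recall for precision when extracting structured annotations from noisy corpora.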