Why They Disagree: Decoding Differences in Opinions about AI Risk on the Lex Fridman Podcast

📅 2025-12-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the persistent disagreement between the "doomer" and "boomer" camps in AI risk discourse—not as a conflict of values, but as a divergence in epistemic premises about the limits of human rationality and the dynamics of complex AI systems (e.g., design vs. emergence, the applicability of historical theories). Method: The authors propose a causally grounded framework for analyzing such disagreements, decomposing contested claims into four premise types—definitional, factual, causal, and moral—and introduce a large language model (LLM) ensemble method to automatically extract and structure reasoning chains from large-scale debate corpora. Contribution/Results: The analysis isolates the core points of divergence in debates over existential ("X") risks versus employment ("E") risks, demonstrating the framework's scalability and validity on public risk topics. The approach establishes a reproducible, evidence-based methodology for empirical analysis of techno-ethical controversies.

📝 Abstract
The emergence of transformative technologies often surfaces deep societal divisions, nowhere more evident than in contemporary debates about artificial intelligence (AI). A striking feature of these divisions is that they persist despite shared interests in ensuring that AI benefits humanity and avoiding catastrophic outcomes. This paper analyzes contemporary debates about AI risk, parsing the differences between the "doomer" and "boomer" perspectives into definitional, factual, causal, and moral premises to identify key points of contention. We find that differences in perspectives about existential risk ("X-risk") arise fundamentally from differences in causal premises about design vs. emergence in complex systems, while differences in perspectives about employment risks ("E-risks") pertain to different causal premises about the applicability of past theories (evolution) vs. their inapplicability (revolution). Disagreements about these two forms of AI risk appear to share two properties: neither involves significant disagreements on moral values, and both can be described in terms of differing views on the extent of boundedness of human rationality. Our approach to analyzing reasoning chains at scale, using an ensemble of LLMs to parse textual data, can be applied to identify key points of contention in debates about risk to the public in any arena.
Problem

Research questions and friction points this paper is trying to address.

Analyzes differences in AI risk perspectives between doomer and boomer views
Identifies key contention points in existential and employment risk debates
Applies LLM ensemble to decode reasoning chains in public risk discussions
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM ensemble analyzes reasoning chains
Parses textual data for contention points
Identifies causal premise differences in debates
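The paper does not publish its prompts or aggregation logic here, but the ensemble idea it describes—several LLMs independently labeling a contested claim with one of the four premise types, then votes being combined—can be sketched as follows. Everything below (the function names, the example claim, and the stand-in "model votes") is hypothetical illustration, not the authors' implementation; in practice each vote would come from prompting a different LLM.

```python
# Hedged sketch of a majority-vote LLM ensemble for premise-type labeling.
# The per-model labels are hypothetical stand-ins for real LLM API calls.
from collections import Counter

# The four premise types from the paper's framework.
PREMISE_TYPES = {"definitional", "factual", "causal", "moral"}

def ensemble_label(claim: str, model_outputs: list[str]) -> str:
    """Aggregate per-model premise labels for one claim by majority vote.

    model_outputs: one label per LLM in the ensemble; invalid labels
    (hallucinated categories, formatting noise) are discarded first.
    """
    valid = [label for label in model_outputs if label in PREMISE_TYPES]
    if not valid:
        raise ValueError(f"no valid labels returned for claim: {claim!r}")
    winner, _count = Counter(valid).most_common(1)[0]
    return winner

# Hypothetical example: three model votes on one contested claim.
claim = "Capabilities will emerge unpredictably as models scale."
votes = ["causal", "causal", "factual"]
print(ensemble_label(claim, votes))  # causal
```

Majority voting is just one plausible aggregation rule; the paper's actual method may weight models differently or resolve ties another way.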