Undermining Mental Proof: How AI Can Make Cooperation Harder by Making Thinking Easier

📅 2024-07-19
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper identifies a critical societal risk: highly capable AI systems, such as large language models, may erode "mental proof," a foundational social mechanism by which agents credibly signal unobservable mental states (e.g., intentions, values) through observable behavior, thereby exacerbating cooperation failures in low-trust environments. Method: We develop the first integrative theoretical framework unifying game theory, signaling economics, evolutionary biology, and AI behavioral modeling; formally characterize necessary conditions for mental proof; and systematically analyze three AI-mediated interference pathways: reduced signaling costs, diminished intent discernibility, and dilution of behavioral scarcity. Contribution/Results: We identify and mechanistically explain the counterintuitive "thinking becomes easier → cooperation becomes harder" paradox, offering a testable theoretical benchmark for assessing AI's societal impact on trust and cooperation.

📝 Abstract
Large language models and other highly capable AI systems ease the burdens of deciding what to say or do, but this very ease can undermine the effectiveness of our actions in social contexts. We explain this apparent tension by introducing the integrative theoretical concept of "mental proof," which occurs when observable actions are used to certify unobservable mental facts. From hiring to dating, mental proofs enable people to credibly communicate values, intentions, states of knowledge, and other private features of their minds to one another in low-trust environments where honesty cannot be easily enforced. Drawing on results from economics, theoretical biology, and computer science, we describe the core theoretical mechanisms that enable people to effect mental proofs. An analysis of these mechanisms clarifies when and how artificial intelligence can make low-trust cooperation harder despite making thinking easier.
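The core mechanism can be illustrated with a toy costly-signaling model (a minimal sketch constructed for illustration, not taken from the paper): a signal credibly certifies a mental state only when it is cheap enough for sincere senders to pay but too expensive for insincere senders to fake. If AI assistance collapses the cost of producing the signal for everyone, that separating condition fails and the signal stops proving anything.

```python
# Minimal costly-signaling sketch (hypothetical parameters, for illustration only).
# A signal "separates" sincere from insincere senders when sincere types gain
# by sending it while insincere types would lose by faking it.

def separates(benefit, cost_sincere, cost_insincere):
    """Return True iff the signal supports a separating equilibrium:
    sincere senders profit (cost_sincere < benefit) while insincere
    senders do not (benefit < cost_insincere)."""
    return cost_sincere < benefit < cost_insincere

# Without AI: an effortful, personalized message is cheap for the sincere
# sender (who holds the underlying mental state anyway) but costly to fake.
print(separates(benefit=10, cost_sincere=2, cost_insincere=15))  # True

# With AI: generation cost collapses for everyone, including fakers, so the
# same message no longer certifies the sender's unobservable mental state.
ai_cost = 1
print(separates(benefit=10, cost_sincere=ai_cost, cost_insincere=ai_cost))  # False
```

This matches the paper's "reduced signaling costs" pathway: the problem is not that thinking gets cheaper per se, but that it gets cheaper for sincere and insincere senders alike.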
Problem

Research questions and friction points this paper is trying to address.

How capable AI may undermine social cooperation
What "mental proof" is and how it works
Why AI complicates trust in low-trust environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces the concept of "mental proof"
Unifies results from economics, theoretical biology, and computer science
Analyzes the mechanisms by which AI erodes low-trust cooperation
Zachary Wojtowicz
Department of Economics and Harvard Business School, Harvard University, Cambridge, MA, United States
Simon DeDeo
Carnegie Mellon University & the Santa Fe Institute
cognitive science · cultural evolution