Explaining Necessary Truths

📅 2025-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional causal explanation models fail to account for how humans explain logically necessary truths (e.g., mathematical theorems), which lack contingent causal antecedents. Method: We propose a “computational explanation” framework, modeling explanation as the emergence of structural simplifications during deductive reasoning; when such simplifications are absent, agents adopt revised false premises as fictive yet explanatory causal anchors—termed “error-driven explanation.” We formalize this as a search-process phenomenon, integrating computational complexity theory, SAT-solving modeling, and cognitive simulations using GPT-4o. Contribution/Results: Our model successfully reproduces human explanatory behavior across SAT puzzles varying in logical complexity and plausibility. It validates core theoretical predictions and generates falsifiable psychological hypotheses, thereby bridging critical gaps among formal logic, cognitive science, and computational modeling of explanation.

📝 Abstract
Knowing the truth is rarely enough -- we also seek out reasons why the fact is true. While much is known about how we explain contingent truths, we understand less about how we explain facts, such as those in mathematics, that are true as a matter of logical necessity. We present a framework, based in computational complexity, where explanations for deductive truths co-emerge with discoveries of simplifying steps during the search process. When such structures are missing, we revert, in turn, to error-based reasons, where a (corrected) mistake can serve as fictitious, but explanatory, contingency-cause: not making the mistake serves as a reason why the truth takes the form it does. We simulate human subjects, using GPT-4o, presented with SAT puzzles of varying complexity and reasonableness, validating our theory and showing how its predictions can be tested in future human studies.
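The abstract's central idea is that explanations for deductive truths co-emerge with simplifying steps found during search. A minimal sketch of one such simplification is unit propagation in SAT solving: whenever a clause contains a single literal, that literal's value is forced, and the formula collapses step by step. The code below is an illustrative toy (not the paper's implementation); clause lists use the common convention of signed integers for literals, and the `trace` of forced literals is a hypothetical stand-in for the "explanatory structure" the search uncovers.

```python
def unit_propagate(clauses):
    """Repeatedly apply unit propagation -- the kind of structural
    simplification the paper ties to explanation -- recording each
    forced literal as a step in the emerging 'explanation'."""
    clauses = [list(c) for c in clauses]
    assignment, trace = {}, []
    while True:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            return clauses, assignment, trace
        lit = units[0]
        assignment[abs(lit)] = lit > 0
        trace.append(lit)
        reduced_clauses = []
        for c in clauses:
            if lit in c:                 # clause satisfied: drop it
                continue
            reduced = [l for l in c if l != -lit]  # remove falsified literal
            if not reduced:              # empty clause: contradiction
                return None, assignment, trace
            reduced_clauses.append(reduced)
        clauses = reduced_clauses

# (x1) AND (NOT x1 OR x2) AND (NOT x2 OR x3): forced chain x1 -> x2 -> x3
remaining, assign, trace = unit_propagate([[1], [-1, 2], [-2, 3]])
print(remaining, assign, trace)  # [] {1: True, 2: True, 3: True} [1, 2, 3]
```

When no such chain of simplifications exists, the formula only yields to brute-force search, which on this view is the regime where error-based explanations take over.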
Problem

Research questions and friction points this paper is trying to address.

Explaining logically necessary truths
Computational complexity framework
Simulating human reasoning with AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Computational complexity framework
Error-based explanatory reasons
GPT-4o for human simulation
Gülce Kardeş
Department of Computer Science, University of Colorado, Boulder, CO 80309, USA & the Santa Fe Institute, Santa Fe, NM 87501, USA
Simon DeDeo
Carnegie Mellon University & the Santa Fe Institute
cognitive science · cultural evolution