🤖 AI Summary
Traditional causal models of explanation fail to account for how humans explain logically necessary truths (e.g., mathematical theorems), which have no contingent causal antecedents.
Method: We propose a “computational explanation” framework that models explanation as the emergence of structural simplifications during deductive reasoning; when such simplifications are absent, agents adopt revised false premises as fictive yet explanatory causal anchors, a mode we term “error-driven explanation.” We formalize this as a search-process phenomenon, integrating computational complexity theory, SAT-solver modeling, and cognitive simulations using GPT-4o.
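To make the search-process framing concrete, here is a minimal sketch under one reading of the framework (not the authors' code): a DPLL-style SAT search whose unit propagations are logged as candidate “simplifying steps,” the structure from which, on this account, an explanation co-emerges. The clause encoding, the `trace` log, and all function names are illustrative assumptions.

```python
# Minimal sketch, assuming one reading of the framework (not the
# authors' implementation): a DPLL-style SAT search that records the
# structural simplifications (unit propagations) it discovers.

def unit_propagate(clauses, assignment, trace):
    """Assign literals forced by unit clauses, logging each forced step."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue  # clause already satisfied
            open_lits = [l for l in clause if abs(l) not in assignment]
            if not open_lits:
                return False  # clause falsified: conflict
            if len(open_lits) == 1:
                lit = open_lits[0]
                assignment[abs(lit)] = lit > 0
                trace.append(("unit", lit))  # a structural simplification
                changed = True
    return True

def dpll(clauses, assignment=None, trace=None):
    """Return (model, trace) if satisfiable, else (None, trace)."""
    assignment = dict(assignment or {})
    trace = trace if trace is not None else []
    if not unit_propagate(clauses, assignment, trace):
        return None, trace
    free = {abs(l) for c in clauses for l in c} - set(assignment)
    if not free:
        return assignment, trace
    var = min(free)
    for value in (True, False):
        model, _ = dpll(clauses, {**assignment, var: value}, trace)
        if model is not None:
            return model, trace
    return None, trace

# (x1 or x2) and (not x1): two unit propagations force the answer, so
# the trace itself supplies the candidate explanation.
model, trace = dpll([[1, 2], [-1]])
print(model, trace)  # {1: False, 2: True} [('unit', -1), ('unit', 2)]
```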
Contribution/Results: In simulations with GPT-4o, the model reproduces the predicted explanatory behavior across SAT puzzles varying in logical complexity and plausibility. These results support the theory's core predictions and yield falsifiable psychological hypotheses for future human studies, bridging gaps among formal logic, cognitive science, and computational modeling of explanation.
📝 Abstract
Knowing the truth is rarely enough: we also seek reasons why the fact is true. While much is known about how we explain contingent truths, we understand less about how we explain facts, such as those in mathematics, that are true as a matter of logical necessity. We present a framework, grounded in computational complexity, in which explanations for deductive truths co-emerge with discoveries of simplifying steps during the search process. When such structures are missing, we fall back on error-based reasons, where a (corrected) mistake serves as a fictitious but explanatory contingent cause: not making the mistake is the reason why the truth takes the form it does. Using GPT-4o, we simulate human subjects presented with SAT puzzles of varying complexity and reasonableness, validating our theory and showing how its predictions can be tested in future human studies.
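As an illustration of the error-driven fallback described above, the sketch below (our hedged reconstruction, not the paper's code) treats a single-literal edit that would have made an unsatisfiable puzzle satisfiable as the fictitious contingent cause: the instance is unsatisfiable “because” that mistake was not made. The brute-force SAT check and the `error_anchor` helper are hypothetical names introduced for illustration.

```python
# Illustrative sketch of error-driven explanation (our reading, not the
# paper's code): find a nearby single-literal edit that would flip an
# UNSAT formula to SAT; the corrected edit plays the role of a
# fictitious contingent cause.

from itertools import product

def satisfiable(clauses):
    """Brute-force SAT check over all assignments (fine for tiny puzzles)."""
    variables = sorted({abs(l) for c in clauses for l in c})
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if all(any(assignment[abs(l)] == (l > 0) for l in clause)
               for clause in clauses):
            return True
    return False

def error_anchor(clauses):
    """Return a (clause index, literal) whose negation flips UNSAT to SAT."""
    assert not satisfiable(clauses)
    for i, clause in enumerate(clauses):
        for j, lit in enumerate(clause):
            edited = [list(c) for c in clauses]
            edited[i][j] = -lit  # the counterfactual "mistake"
            if satisfiable(edited):
                return i, lit  # "had this premise been its negation..."
    return None

# (x1) and (not x1) is UNSAT; flipping the first premise yields SAT, so
# that premise serves as the error-driven explanatory anchor.
print(error_anchor([[1], [-1]]))  # (0, 1)
```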