🤖 AI Summary
Despite significant advances in AI for mathematical and scientific tasks, contemporary models frequently fail on elementary deductive reasoning, exposing a fundamental deficiency in reliable logical inference. This failure stems from the statistical learning paradigm, which optimizes average-case performance over data distributions rather than guaranteeing correctness across all inputs.
Method: We propose “exact learning”—a non-statistical, deterministic learning framework grounded in formal logic, computability theory, and symbolic deductive mechanisms—requiring provably correct outputs for every input, not merely probabilistic approximations.
Contribution/Results: (1) We formally establish that exact learning is a necessary condition for general intelligence; (2) we identify systematic, inherent limitations of current models in sound deductive reasoning; and (3) we provide both theoretical foundations and concrete technical pathways for a foundational paradigm shift in AI—from probabilistic approximation toward logically guaranteed correctness.
📝 Abstract
Sound deductive reasoning -- the ability to derive new knowledge from existing facts and rules -- is an indisputably desirable aspect of general intelligence. Despite the major advances of AI systems in areas such as math and science, especially since the introduction of transformer architectures, it is well documented that even the most advanced frontier systems regularly falter on easily solvable deductive reasoning tasks. Hence, these systems fall short of the goal of artificial general intelligence capable of sound deductive reasoning. We argue that this unsound behavior is a consequence of the statistical learning approach powering their development. To achieve reliable deductive reasoning in learning-based AI systems, we contend that researchers must fundamentally shift from optimizing statistical performance against distributions over reasoning problems and algorithmic tasks to embracing the more ambitious exact learning paradigm, which demands correctness on all inputs. We argue that exact learning is both essential and possible, and that this ambitious objective should guide algorithm design.
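The abstract's notion of sound deduction -- deriving new knowledge from existing facts and rules -- can be made concrete with a toy forward-chaining deducer over Horn-clause rules. This is an illustrative sketch only, not the paper's proposed method; the names (`facts`, `rules`, `derive`) are invented for this example:

```python
def derive(facts, rules):
    """Repeatedly apply rules of the form (premises -> conclusion) until a fixpoint.

    facts: a set of atomic propositions.
    rules: a list of (set_of_premises, conclusion) pairs.

    Every atom returned follows soundly from the inputs, on every input --
    the kind of all-inputs correctness guarantee exact learning asks of
    learned systems, in contrast to average-case statistical performance.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire the rule only when all premises are already derived.
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = {"rain"}
rules = [({"rain"}, "wet_ground"), ({"wet_ground"}, "slippery")]
print(sorted(derive(facts, rules)))  # ['rain', 'slippery', 'wet_ground']
```

The point of the sketch is the contrast it highlights: a symbolic procedure like this is provably correct on all inputs by construction, whereas a statistically trained model offers no such per-input guarantee.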