AI Summary
This work exposes a fundamental trade-off in AI between provable correctness and broad-domain data mapping capability: symbolic systems achieve zero-error reasoning but are confined to narrow, structured domains, whereas generative models exhibit strong generalization yet inherently incur irreducible errors. Methodologically, the project explicitly axiomatizes this implicit "determinism-scope" trade-off for the first time, integrating information-theoretic modeling, formal verification, epistemological analysis, and cross-paradigmatic system comparison. The core contribution is a falsifiable information-theoretic inequality conjecture that reconceptualizes AI evaluation criteria and governance logic, catalyzes a paradigm shift toward hybrid intelligent system design, and establishes a rigorous theoretical foundation for future formal proofs. This framework bridges foundational trustworthy AI theory with technical philosophy, advancing both theoretical understanding and practical engineering principles.
Abstract
This article introduces a conjecture that formalises a fundamental trade-off between provable correctness and broad data-mapping capacity in Artificial Intelligence (AI) systems. When an AI system is engineered for deductively watertight guarantees (demonstrable certainty that its outputs are error-free) -- as in classical symbolic AI -- its operational domain must be narrowly circumscribed and pre-structured. Conversely, a system that maps high-dimensional inputs to rich information outputs -- as in contemporary generative models -- necessarily relinquishes the possibility of zero-error performance, incurring an irreducible risk of error or misclassification. By making this previously implicit trade-off explicit and open to rigorous verification, the conjecture reframes both engineering ambitions and philosophical expectations for AI. After reviewing the historical motivations for this tension, the article states the conjecture in information-theoretic form and contextualises it within broader debates in epistemology, formal verification, and the philosophy of technology. It then analyses the conjecture's implications and consequences, drawing on notions of underdetermination, prudent epistemic risk, and moral responsibility. The discussion clarifies how, if correct, the conjecture would help reshape evaluation standards, governance frameworks, and hybrid system design. The conclusion underscores the importance of eventually proving or refuting the inequality for the future of trustworthy AI.
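The abstract does not reproduce the inequality itself. Purely as an illustrative sketch of how a trade-off of this kind is typically formalised -- the symbols and the functional form below are assumptions for exposition, not the article's actual statement -- one might lower-bound the irreducible error of a system by the gap between the information content of its inputs and the descriptive capacity of its formally verified domain:

```latex
% Illustrative schematic only; notation is assumed, not taken from the article.
% P_err  : probability that the system's output contains an error
% H(X)   : entropy of the input distribution X
% C_D    : descriptive capacity of the formally verified domain D
% f      : some function with f(t) > 0 whenever t > 0
\[
  P_{\mathrm{err}} \;\ge\; f\!\bigl( H(X) - C_{\mathcal{D}} \bigr),
  \qquad f(t) > 0 \;\text{ for } t > 0 .
\]
```

Read this way, zero-error performance ($P_{\mathrm{err}} = 0$) forces $H(X) \le C_{\mathcal{D}}$, i.e. a narrow, pre-structured domain whose inputs the verified system can fully represent; conversely, admitting high-entropy inputs with $H(X) > C_{\mathcal{D}}$ makes some nonzero error probability unavoidable. This mirrors the structure of Fano-type bounds in information theory, though the article's precise formulation may differ.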