Limitations on Safe, Trusted, Artificial General Intelligence

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper establishes a fundamental incompatibility among safety, trust, and artificial general intelligence (AGI). It formalizes “safety” as the invariant that the system never makes a false claim, “trust” as the assumption that the system is safe, and “AGI” as the property of an AI system always matching or exceeding human capability. Drawing on computability theory and formal logic, the authors prove that any system satisfying both safety and trust must fail on certain task instances (in program verification, planning, and graph reachability) that a human can easily and provably solve. The result parallels Gödel’s incompleteness theorems and Turing’s proof of the undecidability of the halting problem, and establishes a formal theoretical limit for trustworthy AGI research.

📝 Abstract
Safety, trust and Artificial General Intelligence (AGI) are aspirational goals in artificial intelligence (AI) systems, and there are several informal interpretations of these notions. In this paper, we propose strict, mathematical definitions of safety, trust, and AGI, and demonstrate a fundamental incompatibility between them. We define safety of a system as the property that it never makes any false claims, trust as the assumption that the system is safe, and AGI as the property of an AI system always matching or exceeding human capability. Our core finding is that -- for our formal definitions of these notions -- a safe and trusted AI system cannot be an AGI system: for such a safe, trusted system there are task instances which are easily and provably solvable by a human but not by the system. We note that we consider strict mathematical definitions of safety and trust, and it is possible for real-world deployments to instead rely on alternate, practical interpretations of these notions. We show our results for program verification, planning, and graph reachability. Our proofs draw parallels to Gödel's incompleteness theorems and Turing's proof of the undecidability of the halting problem, and can be regarded as interpretations of Gödel's and Turing's results.
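The abstract's appeal to Turing's undecidability proof can be illustrated with the classic diagonalization construction. In the sketch below, `claims_halts` stands in for a hypothetical safe, trusted system asked to decide halting; it is an assumed oracle for illustration, not an API from the paper.

```python
# Hedged sketch: `claims_halts` is an assumed total decider for
# "does this zero-argument program halt?", not anything from the paper.

def diagonalize(claims_halts):
    """Build a program that contradicts any total halting-decider."""
    def d():
        if claims_halts(d):
            while True:       # decider said "halts": loop forever instead
                pass
        return "halted"       # decider said "loops": halt immediately
    return d

# A stub decider that claims every program loops forever:
stub = lambda prog: False
d = diagonalize(stub)
print(d())  # prints "halted", contradicting the stub's claim
```

Whatever answer the decider gives about `d`, the program `d` does the opposite, so no decider can be right on every input; a system that never makes false claims must therefore decline some such instances.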
Problem

Research questions and friction points this paper is trying to address.

Defining strict mathematical frameworks for safety, trust and AGI
Proving a fundamental incompatibility between safe, trusted systems and AGI
Establishing these limitations through formal proofs grounded in computability theory
Innovation

Methods, ideas, or system contributions that make the work stand out.

Defined strict mathematical safety and trust criteria
Proved incompatibility between safe systems and AGI
Applied the proofs to program verification, planning, and graph reachability
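For context on the graph reachability domain named in the abstract: the task is, in isolation, straightforwardly decidable on finite graphs, which is what makes the paper's impossibility result on specific instances striking. A minimal BFS sketch of the routine version of the task (function and graph names are illustrative, not from the paper):

```python
from collections import deque

def reachable(adj, src, dst):
    """Return True iff dst is reachable from src in adjacency map adj."""
    seen, frontier = {src}, deque([src])
    while frontier:
        u = frontier.popleft()
        if u == dst:
            return True
        for v in adj.get(u, ()):   # neighbors of u, if any
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return False

g = {"a": ["b"], "b": ["c"], "c": []}
print(reachable(g, "a", "c"))  # True
print(reachable(g, "c", "a"))  # False
```

The paper's claim is not that such routines fail in general, but that a provably safe, trusted system must be unable to answer certain specially constructed instances that a human can provably solve.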