Why AI Safety Requires Uncertainty, Incomplete Preferences, and Non-Archimedean Utilities

📅 2025-12-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the core challenge of AI value alignment and safety: designing AI systems that both effectively assist humans and remain safely interruptible—despite unknown, incomplete, and potentially non-Archimedean human utility functions. Methodologically, it introduces the first systematic integration of Bayesian inverse reinforcement learning, ordinal preference inference, nonstandard-analytic utility modeling, and game-theoretic safety constraints. Theoretical contributions include: (i) rigorous sufficient conditions for safe shutdownability, goal consistency, and learning robustness; (ii) formal demonstration that explicit uncertainty modeling, incomplete preference learning, and non-Archimedean utility representations are necessary for AI safety; and (iii) a mathematically grounded framework for value alignment that relaxes standard assumptions of complete rationality and classical utility theory—thereby offering greater generality and empirical fidelity to real-world human decision-making.

📝 Abstract
How can we ensure that AI systems are aligned with human values and remain safe? We can study this problem through the frameworks of the AI assistance and the AI shutdown games. The AI assistance problem concerns designing an AI agent that helps a human to maximise their utility function(s). However, only the human knows these function(s); the AI assistant must learn them. The shutdown problem instead concerns designing AI agents that: shut down when a shutdown button is pressed; neither try to prevent nor cause the pressing of the shutdown button; and otherwise accomplish their task competently. In this paper, we show that addressing these challenges requires AI agents that can reason under uncertainty and handle both incomplete and non-Archimedean preferences.
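To illustrate what "non-Archimedean" means here (this example is standard decision theory, not drawn from the paper itself): lexicographic preferences cannot be represented by any real-valued utility function, but they can be represented with an infinitesimal weight.

```latex
% Lexicographic preference over pairs (x_1, x_2): the first criterion
% always outweighs the second, no matter how large the second gap is.
\[
  (x_1, x_2) \succ (y_1, y_2)
  \iff
  x_1 > y_1 \;\text{ or }\; (x_1 = y_1 \wedge x_2 > y_2).
\]
% No function u : \mathbb{R}^2 \to \mathbb{R} represents \succ, but a
% nonstandard (non-Archimedean) utility does:
\[
  U(x_1, x_2) = x_1 + \varepsilon\, x_2,
  \qquad 0 < \varepsilon < r \text{ for every positive real } r.
\]
```

A safety-relevant reading: if avoiding shutdown interference lexically dominates task performance, the agent's utility cannot be captured by ordinary real-valued trade-offs, which motivates the paper's use of non-Archimedean representations.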
Problem

Research questions and friction points this paper is trying to address.

Ensuring AI alignment with human values and safety
Designing AI agents that learn human utility functions
Creating AI systems that respond appropriately to shutdown commands
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI agents reason under uncertainty
Handle incomplete and non-Archimedean preferences
Use AI assistance and shutdown game frameworks
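As a rough sketch of the "incomplete preferences" idea (a hypothetical illustration, not the paper's construction): model preferences as a strict partial order, so some pairs of outcomes are incomparable, and have a cautious assistant return every undominated option rather than forcing a choice.

```python
# Hypothetical sketch: incomplete preferences as a strict partial order.
# Each outcome is scored on two criteria the human cares about; when the
# order leaves several options incomparable, the agent defers to the human
# instead of inventing a tie-break.

outcomes = {
    "A": (3, 1),
    "B": (1, 3),
    "C": (2, 2),
    "D": (3, 3),
}

def dominates(x, y):
    """Strict Pareto dominance: at least as good on every criterion,
    strictly better on at least one."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

def choose(options):
    """Return the undominated options. A singleton is a clear choice;
    anything larger signals incomparability, i.e. defer to the human."""
    return [o for o in options
            if not any(dominates(outcomes[p], outcomes[o]) for p in options)]

print(choose(["A", "B", "C", "D"]))  # D dominates the rest -> ["D"]
print(choose(["A", "B"]))            # incomparable -> ["A", "B"]
```

The design point: completing the order prematurely (e.g. summing the criteria) would manufacture preferences the human never expressed, which is exactly the failure mode incomplete-preference learning is meant to avoid.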