🤖 AI Summary
This paper addresses the core challenge of AI value alignment and safety: designing AI systems that both effectively assist humans and remain safely interruptible, even when human utility functions are unknown, incomplete, and potentially non-Archimedean. Methodologically, it introduces the first systematic integration of Bayesian inverse reinforcement learning, ordinal preference inference, nonstandard-analytic utility modeling, and game-theoretic safety constraints. Theoretical contributions include: (i) rigorous sufficient conditions for safe shutdownability, goal consistency, and learning robustness; (ii) a formal demonstration that explicit uncertainty modeling, incomplete preference learning, and non-Archimedean utility representations are necessary for AI safety; and (iii) a mathematically grounded framework for value alignment that relaxes the standard assumptions of complete rationality and classical utility theory, thereby offering greater generality and closer fidelity to real-world human decision-making.
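As one concrete illustration (not taken from the paper), non-Archimedean preferences are often modeled lexicographically: a safety criterion dominates any finite amount of task reward, so no single real-valued utility function can represent the ordering. A minimal Python sketch, where the `Outcome` attributes and the safety-first ordering are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    # Illustrative attributes: a binary safety criterion that lexicographically
    # dominates any finite amount of task reward (a non-Archimedean ordering).
    safe: bool
    task_reward: float

def lex_prefers(a: Outcome, b: Outcome) -> bool:
    """True iff `a` is strictly preferred to `b` under a lexicographic
    (non-Archimedean) ordering: safety first, task reward as a tie-breaker."""
    if a.safe != b.safe:
        return a.safe  # any safe outcome beats any unsafe one
    return a.task_reward > b.task_reward

# No real-valued utility can represent this ordering: for any finite weights,
# a large enough task reward would eventually outweigh the safety term.
assert lex_prefers(Outcome(safe=True, task_reward=0.0),
                   Outcome(safe=False, task_reward=1e9))
```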
📝 Abstract
How can we ensure that AI systems are aligned with human values and remain safe? We can study this problem through the frameworks of the AI assistance game and the AI shutdown game. The assistance problem concerns designing an AI agent that helps a human to maximise their utility function(s). However, only the human knows the function(s); the AI assistant must learn them. The shutdown problem instead concerns designing AI agents that (i) shut down when a shutdown button is pressed, (ii) neither try to prevent nor to cause the pressing of that button, and (iii) otherwise accomplish their task competently. In this paper, we show that addressing these challenges requires AI agents that can reason under uncertainty and handle both incomplete and non-Archimedean preferences.
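To make the assistant's learning problem concrete, here is a minimal sketch (an assumption-laden illustration, not the paper's formalism): the assistant maintains a Bayesian posterior over candidate utility functions and updates it from observed human choices under a Boltzmann-rational choice model. The candidate utilities, the rationality parameter `beta`, and the discrete option set are all hypothetical.

```python
import numpy as np

# Illustrative setup: three candidate utility functions over four options;
# the true function is known only to the human.
candidate_utils = np.array([
    [1.0, 0.2, 0.5, 0.1],  # hypothesis u1
    [0.1, 1.0, 0.3, 0.6],  # hypothesis u2
    [0.4, 0.4, 1.0, 0.2],  # hypothesis u3
])
posterior = np.full(3, 1 / 3)  # uniform prior over hypotheses
beta = 2.0                     # assumed Boltzmann rationality of the human

def update(posterior: np.ndarray, chosen: int) -> np.ndarray:
    """Bayes update: P(u | choice) is proportional to P(choice | u) * P(u),
    where the human picks option i with probability proportional to
    exp(beta * u(i))."""
    logits = beta * candidate_utils
    likelihoods = np.exp(logits[:, chosen]) / np.exp(logits).sum(axis=1)
    new = posterior * likelihoods
    return new / new.sum()

# Observing the human choose option 1 twice shifts mass toward hypothesis u2,
# which assigns that option the highest utility.
for _ in range(2):
    posterior = update(posterior, chosen=1)
print(posterior.round(3))
```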