🤖 AI Summary
This paper identifies and formalizes the Decision-Evaluation Paradox: under realistic axiom structures, the same set of moral or rational axioms can yield inconsistencies when applied jointly to prescriptive decision-making and retrospective evaluation. To analyze this tension, the authors develop a framework for modeling decisions with axioms, introduce a structural taxonomy of decision axioms, and derive their behavioral implications using tools from formal logic and decision theory. The core contribution is a precise formulation of the paradox together with an argument that it arises under realistic axiom structures, so that training models directly on decision data, or uncritically applying the same axiom set to both decision generation and outcome assessment, can induce systematic bias and normative failure. The work offers methodological guidance and cautionary notes for AI ethics alignment, interpretable decision systems, and axiomatic modeling, highlighting the need to distinguish prescriptive from evaluative uses of axioms.
📝 Abstract
We introduce a framework for modeling decisions with axioms, i.e., statements about decisions such as ethical constraints. Using our framework, we define a taxonomy of decision axioms based on their structural properties and demonstrate a tension between using axioms to make decisions and using axioms to evaluate decisions, which we call the Decision-Evaluation Paradox. We argue that the Decision-Evaluation Paradox arises under realistic axiom structures, and that it illuminates why one must be exceptionally careful when training models on decision data or when applying axioms to both make and evaluate decisions.
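As a hypothetical illustration (not taken from the paper), the following sketch shows one way such a tension can arise: a single axiom, "a decision satisfies the axiom iff it leads to an outcome at least as good as any alternative's," applied ex ante under uncertainty prescribes an action that the same axiom condemns ex post under an unlucky realization. The action names, payoffs, and helper functions here are invented for the example.

```python
# Toy illustration (assumptions, not the paper's model): two actions under a
# fair coin flip. 'safe' always pays 1; 'risky' pays 3 on heads, 0 on tails.

ACTIONS = ["safe", "risky"]

def outcome(action, heads):
    """Realized payoff of an action given the coin's outcome."""
    if action == "safe":
        return 1
    return 3 if heads else 0

def prescribe(actions):
    """Ex ante, the axiom can only be applied in expectation:
    choose the action with the highest expected payoff."""
    def expected(a):
        return sum(outcome(a, h) for h in (True, False)) / 2
    return max(actions, key=expected)

def satisfies_axiom(action, heads, actions):
    """Ex post, the axiom compares realized outcomes directly:
    the decision is acceptable iff no alternative did strictly better."""
    return outcome(action, heads) >= max(outcome(a, heads) for a in actions)

choice = prescribe(ACTIONS)  # 'risky': expected payoff 1.5 beats 1
heads = False                # an unlucky realization
print(choice, satisfies_axiom(choice, heads, ACTIONS))
```

Under the tails realization, the prescribed action `risky` fails the very axiom that (in expectation) recommended it, while the unchosen `safe` passes, so using the same axiom prescriptively and evaluatively yields conflicting verdicts.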