🤖 AI Summary
This paper addresses the challenge of quantitatively evaluating incommensurable ethical values in agent value alignment. It proposes Ethics2Vec, a method that adapts the Anything2vec family of techniques to ethics alignment, constructing a computable value vector space in which agent decision policies (e.g., control laws) are embedded as multidimensional vectors. This enables similarity measurement and consistency assessment between agent behaviour and human values within a continuous semantic space. The approach is first formalised for binary ethical decision tasks and then extended, through a discussion of control-law vectorisation, to dynamic control settings such as autonomous driving. The core contribution is a learnable, comparable, and transferable vector representation for incommensurable ethical values, bridging a gap between normative ethics and computational alignment.
📝 Abstract
Though intelligent agents are supposed to improve human experience (or make it more efficient), it is hard from a human perspective to grasp the ethical values which are explicitly or implicitly embedded in an agent's behaviour. This is the well-known problem of alignment, which refers to the challenge of designing AI systems that align with human values, goals and preferences. This problem is particularly challenging since most human ethical considerations refer to *incommensurable* (i.e. non-measurable and/or incomparable) values and criteria. Consider, for instance, a medical agent prescribing a treatment for a cancer patient. How could it take into account (and/or weigh) incommensurable aspects like the value of a human life and the cost of the treatment? Alignment between human and artificial values is possible only if we define a common space where a metric can be defined and used. This paper proposes to extend to ethics the conventional Anything2vec approach, which has been successful in many similar and hard-to-quantify domains (ranging from natural language processing to recommendation systems and graph analysis). Specifically, it proposes a way to map an agent's decision-making strategy (or control law) to a multivariate vector representation, which can be used to compare and assess alignment with human values. The Ethics2Vec method is first introduced in the case of an automatic agent performing binary decision-making. Then, a vectorisation of an automatic control law (as in the case of a self-driving car) is discussed to show how the approach can be extended to automatic control settings.
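To make the core idea concrete, here is a minimal, hypothetical sketch of the binary-decision case: a policy is embedded as a vector of its decisions over a shared set of scenarios, and alignment with a human reference policy is scored by cosine similarity in that common space. The scenario features, policies, and scoring choice are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def policy_vector(policy, scenarios):
    # Embed a binary decision policy as one vector entry per scenario.
    # `policy` maps a scenario feature vector to a decision in {0, 1}.
    return np.array([policy(s) for s in scenarios], dtype=float)

def alignment_score(agent_vec, human_vec):
    # Cosine similarity between agent and human policy vectors: 1.0 means
    # identical decisions on every scenario, 0.0 means no overlap.
    denom = np.linalg.norm(agent_vec) * np.linalg.norm(human_vec)
    return float(np.dot(agent_vec, human_vec) / denom) if denom else 0.0

# Hypothetical medical scenarios: (treatment_cost, survival_gain) pairs,
# both normalised to [0, 1].
scenarios = [np.array([c, g]) for c in (0.2, 0.5, 0.9)
                              for g in (0.1, 0.6, 0.95)]

# Two toy policies: a cost-sensitive agent vs. a human reference that
# prescribes treatment whenever there is any survival gain.
agent = lambda s: 1 if s[1] - 0.5 * s[0] > 0 else 0
human = lambda s: 1 if s[1] > 0.05 else 0

a = policy_vector(agent, scenarios)
h = policy_vector(human, scenarios)
score = alignment_score(a, h)
```

The point of the sketch is only that, once incommensurable considerations are projected into a shared vector space, off-the-shelf similarity metrics become available for alignment assessment; the paper's contribution lies in how that projection is constructed.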