Learning the value systems of agents with preference-based and inverse reinforcement learning

📅 2026-02-03
🏛️ Autonomous Agents and Multi-Agent Systems
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses a central challenge in AI governance: automatically learning the heterogeneous moral value systems held by human users in open multi-agent systems to achieve ethical alignment. The work presents the first formalization of value system learning as a computable problem and introduces an automated framework that eliminates the need for labor-intensive manual annotation. By observing human behaviors and demonstrations, the approach integrates preference learning with inverse reinforcement learning within a multi-objective Markov decision process to infer context-sensitive value grounding functions and their associated weights. Experiments in two simulated environments demonstrate that the method effectively reconstructs human-aligned value systems from demonstration data, offering a scalable and interpretable pathway toward ethically aligned artificial intelligence.

📝 Abstract
Agreement Technologies refer to open computer systems in which autonomous software agents interact with one another, typically on behalf of humans, in order to come to mutually acceptable agreements. With the advance of AI systems in recent years, it has become apparent that such agreements, in order to be acceptable to the involved parties, must remain aligned with ethical principles and moral values. However, this is notoriously difficult to ensure, especially as different human users (and their software agents) may hold different value systems, i.e. they may differently weigh the importance of individual moral values. Furthermore, it is often hard to specify the precise meaning of a value in a particular context in a computational manner. Methods to estimate value systems based on human-engineered specifications, e.g. based on value surveys, are limited in scale due to the need for intense human moderation. In this article, we propose a novel method to automatically learn value systems from observations and human demonstrations. In particular, we propose a formal model of the value system learning problem, its instantiation to sequential decision-making domains based on multi-objective Markov decision processes, as well as tailored preference-based and inverse reinforcement learning algorithms to infer value grounding functions and value systems. The approach is illustrated and evaluated by two simulated use cases.
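To make the formal model concrete, here is a minimal illustrative sketch (not the paper's implementation, and all names are hypothetical): a value system is modeled as a weight vector over per-value grounding functions, and the multi-objective MDP reward is linearly scalarized as r(s, a) = Σ_v w_v · g_v(s, a).

```python
# Hypothetical sketch: a value system as weights over value grounding
# functions, scalarizing a multi-objective MDP reward.

def grounding_vector(state, action, groundings):
    """Evaluate each value grounding function on a (state, action) pair."""
    return [g(state, action) for g in groundings]

def scalarized_reward(state, action, groundings, weights):
    """Linear scalarization of the multi-objective reward by the value system."""
    return sum(w * g for w, g in zip(weights, grounding_vector(state, action, groundings)))

# Two toy moral values (illustrative only, not from the paper).
safety = lambda s, a: 1.0 if a == "slow" else 0.0
efficiency = lambda s, a: 1.0 if a == "fast" else 0.2

value_system = [0.7, 0.3]  # this agent weighs safety above efficiency
reward = scalarized_reward("road", "slow", [safety, efficiency], value_system)
# reward == 0.7 * 1.0 + 0.3 * 0.2 == 0.76
```

Under this reading, preference-based learning estimates the weights from pairwise comparisons of trajectories, while inverse RL recovers the grounding functions themselves from demonstrations.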
Problem

Research questions and friction points this paper is trying to address.

value systems
ethical alignment
multi-agent systems
moral values
value learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

value system learning
preference-based reinforcement learning
inverse reinforcement learning
multi-objective MDP
value alignment
Andrés Holgado-Sánchez
CETINIA, Universidad Rey Juan Carlos, Tulipán (Unnumbered), Móstoles, 28933, Madrid, Spain
Holger Billhardt
Universidad Rey Juan Carlos
Alberto Fernández
CETINIA, Universidad Rey Juan Carlos, Tulipán (Unnumbered), Móstoles, 28933, Madrid, Spain
Sascha Ossowski
CETINIA, Universidad Rey Juan Carlos, Tulipán (Unnumbered), Móstoles, 28933, Madrid, Spain