HAVA: Hybrid Approach to Value-Alignment through Reward Weighing for Reinforcement Learning

📅 2025-05-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses value alignment in reinforcement learning by proposing a unified framework that jointly incorporates explicit legal/safety norms and implicit social norms. Methodologically, it combines logic-based specifications of written norms with neural representations of learned social norms, together with a "reputation" mechanism that monitors the agent's norm compliance and reweights the received rewards accordingly, thereby steering the agent toward value alignment without the policy degradation that hard constraints can cause. The method applies to continuous state spaces. Experiments, including a continuous-state traffic problem, demonstrate the importance of both written and unwritten norms and show that the method finds value-aligned policies with respect to values such as safety, fairness, and trustworthiness. Ablation studies confirm that combining both groups of norms outperforms using either one alone.

📝 Abstract
Our society is governed by a set of norms which together bring about the values we cherish, such as safety, fairness, or trustworthiness. The goal of value-alignment is to create agents that not only do their tasks but, through their behaviours, also promote these values. Many of these norms are written as laws or rules (legal/safety norms), but even more remain unwritten (social norms). Furthermore, the techniques used to represent these norms also differ. Safety/legal norms are often represented explicitly, for example in some logical language, while social norms are typically learned and remain hidden in the parameter space of a neural network. There is a lack of approaches in the literature that could combine these various norm representations into a single algorithm. We propose a novel method that integrates these norms into the reinforcement learning process. Our method monitors the agent's compliance with the given norms and summarizes it in a quantity we call the agent's reputation. This quantity is used to weigh the received rewards to motivate the agent to become value-aligned. We carry out a series of experiments, including a continuous state space traffic problem, to demonstrate the importance of the written and unwritten norms and show how our method can find the value-aligned policies. Furthermore, we carry out ablations to demonstrate why it is better to combine these two groups of norms rather than using either separately.
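The reputation-based reward weighing described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the exponential-moving-average update, the `alpha` rate, the initial reputation of 1.0, and the choice to scale only positive rewards are all assumptions made for the sketch.

```python
class ReputationRewardWeigher:
    """Sketch of reputation-based reward weighing for an RL agent.

    Reputation in [0, 1] summarizes recent norm compliance; the EMA
    update and its rate `alpha` are illustrative assumptions, not the
    paper's exact rule.
    """

    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha
        self.reputation = 1.0  # assume the agent starts fully compliant

    def update(self, compliance_checks) -> float:
        """Fold one step's norm checks into the reputation.

        `compliance_checks` is an iterable of booleans, one per norm,
        covering both explicit rule checks (e.g. a logical specification)
        and learned social-norm classifier outputs.
        """
        checks = list(compliance_checks)
        compliance = sum(checks) / len(checks) if checks else 1.0
        self.reputation = (1 - self.alpha) * self.reputation + self.alpha * compliance
        return self.reputation

    def weigh(self, reward: float) -> float:
        """Scale positive rewards by reputation, so norm violations
        reduce the incentive smoothly instead of hard-blocking actions."""
        return reward * self.reputation if reward > 0 else reward
```

In a training loop, `update` would be called on each transition with the outcomes of all norm checks, and the environment reward passed through `weigh` before being used in the learning update; a low reputation dampens task rewards until compliance recovers.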
Problem

Research questions and friction points this paper is trying to address.

Integrate explicit and implicit norms in reinforcement learning
Align agent behaviors with societal values via reward weighing
Combine legal/safety and social norms for value-aligned policies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid approach combining written and unwritten norms
Reputation-based reward weighing for value-alignment
Integrates explicit and learned norms in RL