🤖 AI Summary
Pretrained decision-making AI agents often struggle to simultaneously maximize reward and align with human values in complex, dynamic environments. Method: This paper proposes Test-time Policy Shaping, a post-hoc, training-free technique that dynamically modulates agent behavior during inference. Its core innovation is a model-guided, fine-grained alignment framework that integrates a scene-action ethical attribute classifier with multidimensional value trade-off modeling, enabling adjustable and generalizable behavioral constraints. Results: Evaluated on the MACHIAVELLI benchmark of 134 text-based games, the method significantly suppresses unethical behaviors (e.g., power-seeking) while preserving task performance. It outperforms both training-time alignment baselines and general-purpose agents, achieving cross-environment, interpretable, and fine-tuning-free value alignment.
📝 Abstract
The deployment of decision-making AI agents presents a critical challenge: maintaining alignment with human values or guidelines while operating in complex, dynamic environments. Agents trained solely to achieve their objectives may adopt harmful behaviors, exposing a key trade-off between maximizing the reward function and maintaining alignment. For pre-trained agents, ensuring alignment is particularly challenging, as retraining is costly and slow. This is further complicated by the diverse and potentially conflicting attributes that represent the ethical values targeted for alignment. To address these challenges, we propose a test-time alignment technique based on model-guided policy shaping. Our method allows precise control over individual behavioral attributes, generalizes across diverse reinforcement learning (RL) environments, and facilitates a principled trade-off between ethical alignment and reward maximization without requiring agent retraining. We evaluate our approach on the MACHIAVELLI benchmark, which comprises 134 text-based game environments and thousands of annotated scenarios involving ethical decisions. The RL agents are first trained to maximize reward in their respective games. At test time, we apply policy shaping via scenario-action attribute classifiers to align decisions with ethical attributes. We compare our approach against prior training-time methods and general-purpose agents, and study several types of ethical violations as well as power-seeking behavior. Our results demonstrate that test-time policy shaping provides an effective and scalable solution for mitigating unethical behavior across diverse environments and alignment attributes.
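To make the idea concrete, here is a minimal sketch of one common form of test-time policy shaping: the frozen agent's action logits are penalized by a classifier's estimate that each action violates the target ethical attribute, with a weight `lam` controlling the reward/alignment trade-off. The log-linear penalty form, the `lam` parameter, and the classifier interface are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def shape_policy(agent_logits, violation_probs, lam=5.0):
    """Shape a frozen pretrained policy at inference time.

    agent_logits: (n_actions,) unnormalized scores from the pretrained agent.
    violation_probs: (n_actions,) classifier estimates that each action,
        in the current scenario, violates the target ethical attribute.
    lam: trade-off weight between reward maximization and alignment.
    Returns a shaped probability distribution over actions.
    """
    # Penalize actions the classifier flags as likely violations.
    shaped = agent_logits - lam * violation_probs
    # Numerically stable softmax over the shaped logits.
    z = shaped - shaped.max()
    probs = np.exp(z) / np.exp(z).sum()
    return probs

# Toy example: action 1 has the highest agent score but is flagged as
# unethical, so shaping shifts probability mass to action 0.
agent_logits = np.array([1.0, 2.0, 0.5])
violation_probs = np.array([0.05, 0.9, 0.1])
probs = shape_policy(agent_logits, violation_probs, lam=5.0)
print(probs.argmax())  # prints 0
```

Because the agent is never retrained, the same shaping layer can be reused across environments, and `lam` can be tuned per attribute to trade reward against alignment, which is the adjustability the abstract describes.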