Human-Aligned Skill Discovery: Balancing Behaviour Exploration and Alignment

📅 2025-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the prevalence of unsafe and impractical behaviors in unsupervised skill discovery for reinforcement learning, this paper proposes the first framework that fully integrates human value alignment constraints throughout the skill discovery process. Methodologically, it unifies inverse reinforcement learning with preference modeling to incorporate human feedback, designs a multi-objective gradient optimization algorithm, and introduces an adjustable-strength alignment mechanism to jointly optimize skill diversity and value alignment. Evaluated on 2D navigation and SafetyGymnasium benchmarks, the approach significantly improves skill safety and practicality: downstream task transfer success rates increase by 37%, while maintaining high behavioral diversity and effectively avoiding unaligned, unproductive exploration.
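The summary describes jointly optimising skill diversity and value alignment via multi-objective gradients with an adjustable alignment strength. As a rough illustration only, the sketch below combines two stand-in quadratic objectives with a tunable weight; the function names, losses, and the scalar-weighting scheme are invented for exposition and are not the paper's actual algorithm.

```python
# Hypothetical sketch of an adjustable-strength multi-objective update.
# The quadratic losses and the weight `lam` are illustrative stand-ins,
# not the objectives used in HaSD.

def grad_quadratic(theta, target):
    """Gradient of 0.5 * ||theta - target||^2, a toy stand-in objective."""
    return [t - g for t, g in zip(theta, target)]

def hasd_step(theta, div_target, align_target, lam, lr=0.1):
    """One gradient step on (1 - lam) * L_diversity + lam * L_alignment.

    lam in [0, 1] plays the role of the adjustable alignment strength:
    lam = 0 optimises diversity only, lam = 1 optimises alignment only.
    """
    g_div = grad_quadratic(theta, div_target)
    g_align = grad_quadratic(theta, align_target)
    return [t - lr * ((1 - lam) * gd + lam * ga)
            for t, gd, ga in zip(theta, g_div, g_align)]

theta = [0.0, 0.0]
for _ in range(200):
    theta = hasd_step(theta, div_target=[1.0, 0.0],
                      align_target=[0.0, 1.0], lam=0.5)
# With lam = 0.5 the parameters settle midway between the two targets.
```

Sweeping `lam` in this toy setting mirrors the paper's idea of a configurable family of skills with different diversity-alignment trade-offs.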

📝 Abstract
Unsupervised skill discovery in Reinforcement Learning aims to mimic humans' ability to autonomously discover diverse behaviors. However, existing methods are often unconstrained, making it difficult to find useful skills, especially in complex environments, where discovered skills are frequently unsafe or impractical. We address this issue by proposing Human-aligned Skill Discovery (HaSD), a framework that incorporates human feedback to discover safer, more aligned skills. HaSD simultaneously optimises skill diversity and alignment with human values. This approach ensures that alignment is maintained throughout the skill discovery process, eliminating the inefficiencies associated with exploring unaligned skills. We demonstrate its effectiveness in both 2D navigation and SafetyGymnasium environments, showing that HaSD discovers diverse, human-aligned skills that are safe and useful for downstream tasks. Finally, we extend HaSD to learn a range of configurable skills with varying diversity-alignment trade-offs that could be useful in practical scenarios.
Problem

Research questions and friction points this paper is trying to address.

Diverse and Safe Behavior
Reinforcement Learning
Ethical Standards
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-aligned Skill Discovery
Reinforcement Learning
Safe and Practical Behavior Learning
Maxence Hussonnois
A2I2, Deakin University, Geelong, Australia
T. G. Karimpanal
School of IT, Deakin University, Geelong, Australia
Santu Rana
Associate Professor of Computer Science, Deakin University
Machine Learning · Bayesian Optimization · Robotics · Adversarial Learning