🤖 AI Summary
To address the prevalence of unsafe and impractical behaviors in unsupervised skill discovery for reinforcement learning, this paper proposes the first framework that fully integrates human value alignment constraints throughout the skill discovery process. Methodologically, it unifies inverse reinforcement learning with preference modeling to incorporate human feedback, designs a multi-objective gradient optimization algorithm, and introduces an adjustable-strength alignment mechanism to jointly optimize skill diversity and value alignment. Evaluated on 2D navigation and SafetyGymnasium benchmarks, the approach significantly improves skill safety and practicality: downstream task transfer success rates increase by 37%, while maintaining high behavioral diversity and effectively avoiding unaligned, unproductive exploration.
📝 Abstract
Unsupervised skill discovery in reinforcement learning aims to mimic humans' ability to autonomously discover diverse behaviors. However, existing methods are often unconstrained, making it difficult to find useful skills, especially in complex environments where discovered skills are frequently unsafe or impractical. We address this issue by proposing Human-aligned Skill Discovery (HaSD), a framework that incorporates human feedback to discover safer, more aligned skills. HaSD simultaneously optimises skill diversity and alignment with human values, ensuring that alignment is maintained throughout the skill discovery process and eliminating the inefficiencies associated with exploring unaligned skills. We demonstrate its effectiveness in both 2D navigation and SafetyGymnasium environments, showing that HaSD discovers diverse, human-aligned skills that are safe and useful for downstream tasks. Finally, we extend HaSD by learning a range of configurable skills with varying degrees of diversity-alignment trade-off that could be useful in practical scenarios.
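The abstract describes jointly optimising skill diversity and human-value alignment with an adjustable alignment strength, but does not spell out the objective. A minimal illustrative sketch, assuming a DIAYN-style diversity reward, a Bradley-Terry-style preference score for alignment, and a hypothetical scalarisation weight `lam` (all assumptions, not the paper's actual formulation):

```python
import math

def diversity_reward(log_q_skill_given_state: float, log_p_skill: float) -> float:
    """DIAYN-style intrinsic reward: log q(z|s) - log p(z).
    Higher when the skill is easy to distinguish from the state."""
    return log_q_skill_given_state - log_p_skill

def alignment_reward(pref_logit: float) -> float:
    """Log-probability that human feedback prefers this behaviour,
    under a Bradley-Terry preference model (assumed form)."""
    return math.log(1.0 / (1.0 + math.exp(-pref_logit)))

def combined_reward(log_q: float, log_p: float, pref_logit: float, lam: float) -> float:
    """Scalarised multi-objective reward: diversity + lam * alignment.
    `lam` plays the role of the adjustable alignment strength."""
    return diversity_reward(log_q, log_p) + lam * alignment_reward(pref_logit)

# Sweeping `lam` yields skills at different diversity/alignment trade-offs.
for lam in (0.0, 0.5, 2.0):
    print(lam, round(combined_reward(-0.1, -1.6, 2.0, lam), 3))
```

With `lam = 0` the objective reduces to pure diversity-driven discovery; increasing `lam` trades diversity for alignment, mirroring the configurable-skill extension mentioned at the end of the abstract.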