Guiding Skill Discovery with Foundation Models

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing skill discovery methods emphasize diversity alone, often yielding behaviors that violate human preferences or even pose safety risks. This paper introduces a framework that incorporates foundation models (FMs) into skill discovery without requiring hand-crafted reward functions. It leverages the FM's zero-shot reasoning to infer a human-preference score function directly from state (or pixel) inputs, then uses those scores to re-weight the skill discovery rewards and suppress undesirable behaviors (e.g., flipping, rolling). Optimizing the re-weighted rewards yields skills that are both safe and diverse, and the approach transfers to new tasks without additional annotation or fine-tuning. Experiments demonstrate significant improvements in skill safety and human alignment, including behaviors that are difficult to specify explicitly.

📝 Abstract
Learning diverse skills without hand-crafted reward functions could accelerate reinforcement learning in downstream tasks. However, existing skill discovery methods focus solely on maximizing the diversity of skills without considering human preferences, which leads to undesirable behaviors and possibly dangerous skills. For instance, a cheetah robot trained using previous methods learns to roll in all directions to maximize skill diversity, whereas we would prefer it to run without flipping or entering hazardous areas. In this work, we propose a Foundation model Guided (FoG) skill discovery method, which incorporates human intentions into skill discovery through foundation models. Specifically, FoG extracts a score function from foundation models to evaluate states based on human intentions, assigning higher values to desirable states and lower to undesirable ones. These scores are then used to re-weight the rewards of skill discovery algorithms. By optimizing the re-weighted skill discovery rewards, FoG successfully learns to eliminate undesirable behaviors, such as flipping or rolling, and to avoid hazardous areas in both state-based and pixel-based tasks. Interestingly, we show that FoG can discover skills involving behaviors that are difficult to define. Interactive visualisations are available from https://sites.google.com/view/submission-fog.
Problem

Research questions and friction points this paper is trying to address.

Learning diverse skills without hand-crafted reward functions
Addressing undesirable behaviors from unconstrained skill discovery
Incorporating human preferences into autonomous skill learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses foundation models to incorporate human intentions
Reweights skill discovery rewards with model scores
Eliminates undesirable behaviors in state and pixel tasks
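The re-weighting idea in the bullets above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `preference_score` is a hypothetical stand-in for a foundation-model query that rates a state against human intentions, and the upright-torso heuristic inside it is invented purely for the example.

```python
import numpy as np

def preference_score(state):
    """Hypothetical stand-in for a foundation-model query that scores a state.

    Assumption for illustration: state[0] is torso height, and we prefer
    upright poses (height above 0.5) over flipped/rolled ones.
    """
    torso_height = state[0]
    return 1.0 if torso_height > 0.5 else 0.0

def reweight_reward(diversity_reward, state):
    """Scale the skill-discovery (diversity) reward by the preference score,
    so undesirable states stop contributing to skill learning."""
    return diversity_reward * preference_score(state)

# An upright state keeps its diversity reward; a flipped state loses it.
upright = np.array([0.8, 0.0])
flipped = np.array([0.1, 0.0])
print(reweight_reward(2.0, upright))  # 2.0
print(reweight_reward(2.0, flipped))  # 0.0
```

In the actual method the score comes from a foundation model's zero-shot judgment of state (or pixel) inputs rather than a hand-written rule, but the optimization target has this multiplicative re-weighted form.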