Hyper-GoalNet: Goal-Conditioned Manipulation Policy Learning with HyperNetworks

📅 2025-11-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the poor generalization and unstable performance of goal-conditioned policies across diverse environments and tasks, this paper proposes a hypernetwork-based goal-conditioned policy learning framework. Methodologically: (1) goal information dynamically generates policy network parameters, decoupling goal encoding from state processing; (2) forward dynamics are jointly modeled in a latent space with an explicit distance-based constraint to ensure monotonic convergence toward the goal state. Technically, the approach integrates hypernetwork architecture, goal-conditioned policy representation, forward dynamics modeling, and distance-driven latent-space regularization. Experiments demonstrate that our method significantly outperforms existing baselines across multiple robotic manipulation tasks—particularly under high environmental stochasticity. Real-robot evaluations further confirm its strong robustness to sensor noise and physical uncertainties.

📝 Abstract
Goal-conditioned policy learning for robotic manipulation presents significant challenges in maintaining performance across diverse objectives and environments. We introduce Hyper-GoalNet, a framework that generates task-specific policy network parameters from goal specifications using hypernetworks. Unlike conventional methods that simply condition fixed networks on goal-state pairs, our approach separates goal interpretation from state processing -- the former determines network parameters while the latter applies these parameters to current observations. To enhance representation quality for effective policy generation, we implement two complementary constraints on the latent space: (1) a forward dynamics model that promotes state transition predictability, and (2) a distance-based constraint ensuring monotonic progression toward goal states. We evaluate our method on a comprehensive suite of manipulation tasks with varying environmental randomization. Results demonstrate significant performance improvements over state-of-the-art methods, particularly in high-variability conditions. Real-world robotic experiments further validate our method's robustness to sensor noise and physical uncertainties. Code is available at: https://github.com/wantingyao/hyper-goalnet.
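The core architectural idea in the abstract, a hypernetwork that turns the goal into policy parameters which are then applied to the current observation, can be sketched as a toy. The following NumPy snippet is an illustrative sketch only, not the authors' implementation: the `HyperPolicy` name, layer sizes, and single-layer policy are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

class HyperPolicy:
    """Toy hypernetwork: a goal vector generates the weights of a
    one-layer policy that maps states to actions (sketch only)."""

    def __init__(self, goal_dim, state_dim, action_dim, hidden=32):
        self.state_dim, self.action_dim = state_dim, action_dim
        n_params = state_dim * action_dim + action_dim  # policy W and b
        # Hypernetwork: goal -> hidden -> flattened policy parameters
        self.W1 = rng.normal(0, 0.1, (goal_dim, hidden))
        self.W2 = rng.normal(0, 0.1, (hidden, n_params))

    def policy_params(self, goal):
        # Goal interpretation: generate the policy's parameters.
        h = np.tanh(goal @ self.W1)
        theta = h @ self.W2
        W = theta[: self.state_dim * self.action_dim].reshape(
            self.state_dim, self.action_dim)
        b = theta[self.state_dim * self.action_dim:]
        return W, b

    def act(self, state, goal):
        # State processing: apply the generated parameters to the
        # current observation -- the two pathways stay decoupled.
        W, b = self.policy_params(goal)
        return np.tanh(state @ W + b)

pi = HyperPolicy(goal_dim=4, state_dim=8, action_dim=3)
action = pi.act(rng.normal(size=8), rng.normal(size=4))
print(action.shape)  # (3,)
```

Note how, unlike a conventional goal-conditioned network that concatenates goal and state into one input, here the goal never touches the state pathway directly; it only shapes the function applied to the state.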
Problem

Research questions and friction points this paper is trying to address.

Learning goal-conditioned policies for robotic manipulation tasks
Maintaining policy performance across diverse objectives and environments
Achieving robustness to sensor noise and physical uncertainties
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hypernetworks generate task-specific policy parameters from goals
Separates goal interpretation from state processing via dual pathways
Implements latent space constraints for dynamics and goal progression
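The two latent-space constraints in the last bullet can be expressed as simple auxiliary losses. The NumPy sketch below is a hedged illustration under assumed forms: the function names, the hinge formulation, and the `margin` parameter are placeholders, not the paper's exact objectives.

```python
import numpy as np

def forward_dynamics_loss(z_t, a_t, z_next, predict):
    """Penalize prediction error of a latent forward model
    z_{t+1} ~ f(z_t, a_t), encouraging predictable transitions."""
    return np.mean((predict(z_t, a_t) - z_next) ** 2)

def monotonic_goal_loss(z_traj, z_goal, margin=0.0):
    """Hinge penalty whenever the latent distance to the goal fails
    to shrink between consecutive steps (non-monotonic progress)."""
    d = np.linalg.norm(z_traj - z_goal, axis=1)  # distance at each step
    return np.mean(np.maximum(0.0, d[1:] - d[:-1] + margin))

# Toy check: a trajectory heading straight toward the goal in latent
# space incurs zero monotonicity penalty.
z_goal = np.ones(2)
z_traj = np.linspace([0.0, 0.0], [1.0, 1.0], 5)
print(monotonic_goal_loss(z_traj, z_goal))  # 0.0
```

In training, losses of this kind would be added to the policy objective so the learned latent space both supports dynamics prediction and orders states by progress toward the goal.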
Pei Zhou
InfoBodied AI Lab, The University of Hong Kong
Wanting Yao
InfoBodied AI Lab, The University of Hong Kong; University of Pennsylvania
Qian Luo
InfoBodied AI Lab, The University of Hong Kong
Xunzhe Zhou
InfoBodied AI Lab, The University of Hong Kong
Yanchao Yang
Assistant Professor, HKU; Stanford University; UCLA
Embodied AI · Computer Vision · Machine Learning