Generalizable Geometric Prior and Recurrent Spiking Feature Learning for Humanoid Robot Manipulation

📅 2026-01-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing approaches to humanoid robot manipulation are limited in generalization and practicality due to insufficient scene understanding and inefficient imitation learning. This work proposes RGMP-S, a framework that integrates lightweight 2D geometric-prior-guided multimodal perception, vision-language models, and recurrent spiking neural networks to unify high-level semantic reasoning with low-level action generation. By incorporating geometric inductive biases, RGMP-S achieves efficient 3D scene understanding, while its modeling of long-horizon dynamic features enhances sample efficiency. Evaluated on the ManiSkill simulation benchmark and three heterogeneous real-world robotic platforms, RGMP-S significantly outperforms state-of-the-art methods in unseen environments, demonstrating superior generalization capability and data efficiency.

📝 Abstract
Humanoid robot manipulation is a crucial research area for executing diverse human-level tasks, involving high-level semantic reasoning and low-level action generation. However, precise scene understanding and sample-efficient learning from human demonstrations remain critical challenges, severely hindering the applicability and generalizability of existing frameworks. This paper presents RGMP-S, a novel Recurrent Geometric-prior Multimodal Policy with Spiking features, which facilitates both high-level skill reasoning and data-efficient motion synthesis. To ground high-level reasoning in physical reality, we leverage lightweight 2D geometric inductive biases to enable precise 3D scene understanding within the vision-language model. Specifically, we construct a Long-horizon Geometric Prior Skill Selector that effectively aligns semantic instructions with spatial constraints, achieving robust generalization in unseen environments. To address the data-efficiency issue in robotic action generation, we introduce a Recursive Adaptive Spiking Network that parameterizes robot-object interactions via recursive spiking for spatiotemporal consistency, fully distilling long-horizon dynamic features while mitigating overfitting in sparse demonstration scenarios. Extensive experiments are conducted on the ManiSkill simulation benchmark and three heterogeneous real-world robotic systems: a custom-developed humanoid, a desktop manipulator, and a commercial robotic platform. Empirical results substantiate the superiority of our method over state-of-the-art baselines and validate the efficacy of the proposed modules across diverse generalization scenarios. To facilitate reproducibility, the source code and video demonstrations are publicly available at https://github.com/xtli12/RGMP-S.git.
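The abstract describes parameterizing robot-object interactions with recurrent spiking dynamics. The paper's own Recursive Adaptive Spiking Network is not detailed here, but the general mechanism it builds on — a layer of leaky integrate-and-fire (LIF) neurons with recurrent connections, which carries state across timesteps as binary spikes — can be sketched roughly as follows. All names, weight shapes, and constants (`tau`, `v_th`) below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lif_recurrent(inputs, w_in, w_rec, tau=0.9, v_th=1.0):
    """Generic recurrent LIF layer sketch (NOT the paper's code).

    inputs : (T, d_in) per-timestep feature vectors
    w_in   : (d_in, n) feed-forward weights
    w_rec  : (n, n)    recurrent weights
    tau    : membrane leak factor (hypothetical value)
    v_th   : firing threshold (hypothetical value)
    Returns a (T, n) array of binary spike trains.
    """
    T, n = inputs.shape[0], w_in.shape[1]
    v = np.zeros(n)              # membrane potentials
    s = np.zeros(n)              # spikes emitted at the previous step
    spikes = np.zeros((T, n))
    for t in range(T):
        # leak the potential, then integrate feed-forward and recurrent input
        v = tau * v + inputs[t] @ w_in + s @ w_rec
        s = (v >= v_th).astype(float)   # fire where the threshold is crossed
        v = v * (1.0 - s)               # hard reset the neurons that fired
        spikes[t] = s
    return spikes

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8))            # 16 timesteps of 8-dim features
out = lif_recurrent(x,
                    rng.normal(scale=0.5, size=(8, 32)),
                    rng.normal(scale=0.1, size=(32, 32)))
print(out.shape)  # (16, 32)
```

Because each neuron's membrane potential and spike state feed back into the next step, the layer accumulates long-horizon temporal structure while transmitting only sparse binary events — the property the paper leverages for sample-efficient action generation. Training such a layer end to end additionally requires a surrogate gradient for the non-differentiable threshold, which this forward-only sketch omits.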
Problem

Research questions and friction points this paper is trying to address.

humanoid robot manipulation
scene understanding
sample-efficient learning
generalization
human demonstration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Geometric Prior
Spiking Neural Network
Humanoid Manipulation
Sample-Efficient Learning
Multimodal Policy