UniBYD: A Unified Framework for Learning Robotic Manipulation Across Embodiments Beyond Imitation of Human Demonstrations

📅 2025-12-12
📈 Citations: 0 (Influential: 0)
🤖 AI Summary
In embodied intelligence, morphological discrepancies between robotic and human hands severely limit imitation learning performance. This paper proposes UniBYD, a framework that transcends the conventional "copycat imitation" paradigm to enable robot-centric autonomous manipulation policy learning. Its core contributions are: (1) a Unified Morphological Representation (UMR) that captures kinematic commonalities across diverse hand geometries; and (2) a dynamic PPO algorithm integrating an annealed reward schedule with a hybrid Markov-based shadow engine, enhancing policy generalization and action precision. Evaluated on the newly introduced UniManip benchmark, which features multi-morphology manipulation tasks, UniBYD achieves a 67.90% higher success rate than state-of-the-art methods. It is the first approach to enable efficient, robust policy transfer across heterogeneous robotic hands and faithful reproduction of fine-grained manipulation behaviors.

📝 Abstract
In embodied intelligence, the embodiment gap between robotic and human hands poses significant challenges for learning from human demonstrations. Although some studies have attempted to bridge this gap using reinforcement learning, they remain confined to merely reproducing human manipulation, resulting in limited task performance. In this paper, we propose UniBYD, a unified framework that uses a dynamic reinforcement learning algorithm to discover manipulation policies aligned with the robot's physical characteristics. To enable consistent modeling across diverse robotic hand morphologies, UniBYD incorporates a unified morphological representation (UMR). Building on UMR, we design a dynamic PPO with an annealed reward schedule, enabling reinforcement learning to transition from imitating human demonstrations to exploring policies better adapted to diverse robotic morphologies, thereby going beyond mere imitation of human hands. To address frequent failures in learning human priors during the early training stage, we design a hybrid Markov-based shadow engine that enables reinforcement learning to imitate human manipulations in a fine-grained manner. To evaluate UniBYD comprehensively, we propose UniManip, the first benchmark encompassing robotic manipulation tasks spanning multiple hand morphologies. Experiments demonstrate a 67.90% improvement in success rate over the current state-of-the-art. Upon acceptance of the paper, we will release our code and benchmark at https://github.com/zhanheng-creator/UniBYD.
Problem

Research questions and friction points this paper is trying to address.

Bridging the embodiment gap between robotic and human hands for manipulation learning.
Moving beyond imitation to discover policies aligned with robot physical characteristics.
Enabling consistent modeling across diverse robotic hand morphologies.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic reinforcement learning algorithm for robot-specific policies
Unified morphological representation for diverse robotic hands
Hybrid Markov-based shadow engine for fine-grained human imitation
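The paper's details are not public, but the annealed reward schedule described above can be sketched in one plausible minimal form: blend an imitation-tracking reward with a task reward, with the imitation weight decaying over training so the policy shifts from copying human demonstrations toward morphology-adapted exploration. The function name, the linear decay, and the two-term blend are assumptions for illustration, not the authors' implementation.

```python
def annealed_reward(r_imitation: float, r_task: float,
                    step: int, total_steps: int) -> float:
    """Blend imitation and task rewards with a linearly annealed weight.

    Early in training (step ~ 0) the imitation term dominates, anchoring the
    policy to human priors; late in training the task term dominates, letting
    the policy diverge from human motion where its morphology benefits.
    """
    alpha = max(0.0, 1.0 - step / total_steps)  # anneals from 1.0 down to 0.0
    return alpha * r_imitation + (1.0 - alpha) * r_task
```

In a PPO loop this would simply replace the environment reward at each timestep; nonlinear schedules (cosine, exponential) are equally plausible choices.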
Tingyu Yuan (CASIA, UCAS)
Biaoliang Guan (XJTU)
Wen Ye (CASIA, UCAS)
Ziyan Tian (CASIA, UCAS)
Yi Yang (CSU)
Weijie Zhou (BJTU)
Yan Huang (CASIA, UCAS)
Peng Wang (CASIA, UCAS)
Chaoyang Zhao (Institute of Automation, Chinese Academy of Sciences)
Jinqiao Wang (CASIA, UCAS)