iKap: Kinematics-aware Planning with Imperative Learning

📅 2024-12-12
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Existing robotic trajectory planning approaches suffer from high latency and error propagation in modular systems, or generate infeasible trajectories due to the neglect of kinematic constraints in purely data-driven methods. To address these issues, this paper proposes a kinematics-aware end-to-end trajectory planning framework. Its core innovation lies in embedding an explicit robot kinematic model into a differentiable bilevel optimization architecture, enabling self-supervised, gradient-based trajectory learning: the upper level optimizes for collision avoidance and task objectives, while the lower level enforces kinematic feasibility via a differentiable state transition. The method requires no ground-truth trajectory annotations and is compatible with diverse downstream controllers. Experiments demonstrate significant improvements over state-of-the-art approaches—achieving a +12.7% increase in task success rate and a −38% reduction in planning latency—under complex, cluttered scenarios.

Technology Category

Application Category

📝 Abstract
Trajectory planning in robotics aims to generate collision-free pose sequences that can be reliably executed. Recently, vision-to-planning systems have gained increasing attention for their efficiency and ability to interpret and adapt to surrounding environments. However, traditional modular systems suffer from increased latency and error propagation, while purely data-driven approaches often overlook the robot's kinematic constraints. This oversight leads to discrepancies between planned trajectories and those that are executable. To address these challenges, we propose iKap, a novel vision-to-planning system that integrates the robot's kinematic model directly into the learning pipeline. iKap employs a self-supervised learning approach and incorporates the state transition model within a differentiable bi-level optimization framework. This integration ensures the network learns collision-free waypoints while satisfying kinematic constraints, enabling gradient back-propagation for end-to-end training. Our experimental results demonstrate that iKap achieves higher success rates and reduced latency compared to the state-of-the-art methods. Besides the complete system, iKap offers a visual-to-planning network that seamlessly works with various controllers, providing a robust solution for robots navigating complex environments.
Problem

Research questions and friction points this paper is trying to address.

Integrates kinematic model into learning to ensure executable trajectories
Reduces latency and error in vision-to-planning robotic systems
Ensures collision-free waypoints while satisfying kinematic constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates kinematic model into learning pipeline
Uses self-supervised bi-level optimization framework
Ensures collision-free waypoints with kinematic constraints
🔎 Similar Papers
Q
Qihang Li
Spatial AI & Robotics (SAIR) Lab, University at Buffalo, USA
Zhuoqun Chen
Zhuoqun Chen
Duke University
RoboticsReinforcement Learning
H
Haoze Zheng
Spatial AI & Robotics (SAIR) Lab, University at Buffalo, USA
H
Haonan He
Carnegie Mellon University, USA
Shaoshu Su
Shaoshu Su
PhD Student, University at Buffalo, SUNY
SLAMMachine LearningMPCMulti Agent System
Junyi Geng
Junyi Geng
Assistant Professor, Pennsylvania State University
aerial roboticscooperative controltrajectory planningvision-based navigationmachine learning
C
Chen Wang
Spatial AI & Robotics (SAIR) Lab, University at Buffalo, USA