🤖 AI Summary
Existing robotic motion planning approaches struggle to simultaneously ensure safety, dynamic feasibility, and scalability to high degrees of freedom. This work proposes the first GPU-native unified framework that integrates B-spline-based trajectory optimization with dynamic constraints, deeply fused distance fields combining TSDF and ESDF representations, topology-aware kinematics, and differentiable inverse dynamics. The framework further introduces a Map-Reduce-based self-collision detection scheme and a scalable CUDA architecture, enabling efficient motion generation for 48-DoF humanoid robots. Experimental results demonstrate significant improvements: task success rate increases to 99.7% (compared to 72–77% for baselines), collision-free inverse kinematics achieves 99.6% success, constraint satisfaction in retargeting reaches 89.5% (versus 61% with PyRoki), motion tracking error is reduced by 21%, and cross-seed variance decreases by a factor of 12.
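The Map-Reduce self-collision scheme mentioned above can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's CUDA implementation: the function name `self_collision_free`, the sphere-based link approximation, and the pre-filtered pair list are all hypothetical, but the map-then-reduce structure (per-pair clearance, then a minimum reduction per configuration) is the pattern the summary names.

```python
import numpy as np

def self_collision_free(sphere_centers, sphere_radii, pair_index, margin=0.0):
    """Map step: signed clearance for every candidate link-sphere pair.
    Reduce step: minimum clearance over all pairs per configuration.
    A configuration is self-collision free iff that minimum stays positive.

    sphere_centers: (B, S, 3) collision-sphere centers for B configurations
    sphere_radii:   (S,) radius of each collision sphere
    pair_index:     (P, 2) sphere pairs to check (adjacent links and pairs
                    that can never touch are assumed pre-filtered out)
    """
    a, b = pair_index[:, 0], pair_index[:, 1]
    # Map: center distance minus summed radii -> clearance per pair, (B, P).
    diff = sphere_centers[:, a, :] - sphere_centers[:, b, :]
    clearance = np.linalg.norm(diff, axis=-1) - (sphere_radii[a] + sphere_radii[b])
    # Reduce: minimum clearance over all pairs for each configuration, (B,).
    min_clearance = clearance.min(axis=-1)
    return min_clearance > margin, min_clearance
```

On a GPU the map step is embarrassingly parallel over (configuration, pair) and the reduce step is a standard parallel min-reduction, which is what makes this formulation attractive for batched trajectory checking.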
📝 Abstract
Effective robot autonomy requires motion generation that is safe, feasible, and reactive. Current methods are fragmented: fast planners output physically unexecutable trajectories, reactive controllers struggle with high-fidelity perception, and existing solvers fail on high-DoF systems. We present cuRoboV2, a unified framework with three key innovations: (1) B-spline trajectory optimization that enforces smoothness and torque limits; (2) a GPU-native TSDF/ESDF perception pipeline that generates dense signed distance fields covering the full workspace, unlike existing methods that only provide distances within sparsely allocated blocks, up to 10x faster and with 8x less memory than the state-of-the-art at manipulation scale, with up to 99% collision recall; and (3) scalable GPU-native whole-body computation, namely topology-aware kinematics, differentiable inverse dynamics, and map-reduce self-collision detection, that achieves up to 61x speedup while also extending to high-DoF humanoids (where previous GPU implementations fail). On benchmarks, cuRoboV2 achieves 99.7% success under a 3 kg payload (where baselines achieve only 72–77%), 99.6% collision-free IK on a 48-DoF humanoid (where prior methods fail entirely), and 89.5% retargeting constraint satisfaction (vs. 61% for PyRoki); these collision-free motions yield locomotion policies with 21% lower tracking error than PyRoki and 12x lower cross-seed variance than mink. A ground-up codebase redesign for discoverability enabled LLM coding assistants to author up to 73% of new modules, including hand-optimized CUDA kernels, demonstrating that well-structured robotics code can unlock productive human–LLM collaboration. Together, these advances provide a unified, dynamics-aware motion generation stack that scales from single-arm manipulators to full humanoids.
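To make the B-spline innovation concrete, here is a hedged NumPy sketch of why B-spline parameterization pairs well with smoothness and limit constraints. The function names and uniform-knot assumption are illustrative choices, not cuRoboV2's API: a uniform cubic B-spline is evaluated in matrix form, its velocity and acceleration come from the same control points, and the convex-hull property gives a conservative velocity bound directly from control-point differences, which is the standard mechanism for enforcing derivative limits on the whole curve.

```python
import numpy as np

# Uniform cubic B-spline basis in matrix form (Cox-de Boor coefficients).
M = np.array([[-1.0,  3.0, -3.0, 1.0],
              [ 3.0, -6.0,  3.0, 0.0],
              [-3.0,  0.0,  3.0, 0.0],
              [ 1.0,  4.0,  1.0, 0.0]]) / 6.0

def eval_cubic_bspline(ctrl, u, seg):
    """Position, velocity, acceleration of a uniform cubic B-spline at
    local parameter u in [0, 1] on segment `seg` (knot spacing = 1).
    ctrl: (N, dof) control points, N >= seg + 4."""
    P = ctrl[seg:seg + 4]                          # local control points
    U   = np.array([u**3,   u**2, u,   1.0])       # position basis
    dU  = np.array([3*u**2, 2*u,  1.0, 0.0])       # first derivative
    ddU = np.array([6*u,    2.0,  0.0, 0.0])       # second derivative
    return U @ M @ P, dU @ M @ P, ddU @ M @ P

def velocity_bound(ctrl, dt):
    """Convex-hull bound: the derivative of a B-spline is a lower-degree
    B-spline whose control points are scaled control-point differences,
    so |velocity| <= max |P_{i+1} - P_i| / dt everywhere on the curve."""
    return np.abs(np.diff(ctrl, axis=0) / dt).max()
```

Because the bound depends only on the control points, a trajectory optimizer can keep joint-velocity (and, one degree lower, acceleration) limits satisfied over the entire continuous curve rather than only at sampled waypoints.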