MaskPlanner: Learning-Based Object-Centric Motion Generation from 3D Point Clouds

📅 2025-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing object-centric motion generation (OCMG) methods for industrial multi-objective long-horizon motion planning exhibit poor adaptability—relying on heuristic design, expensive optimization, or strong geometric assumptions—thus failing to generalize to real-world scenarios. Method: We propose the first end-to-end, point-cloud-driven OCMG framework that directly learns object-centered trajectories from unstructured 3D point clouds, without geometric priors or hand-crafted optimization. We introduce a novel path mask mechanism enabling joint local path segment generation and global path grouping in a single forward pass, unifying geometric awareness with task-level semantics. The architecture integrates local neighborhood feature extraction, path segment regression, and differentiable path clustering. Results: On real-world robotic spray-painting tasks, our method achieves >99% surface coverage on unseen objects; the generated 6-DoF trajectories execute directly on physical robots and yield expert-level coating quality.
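The path mask mechanism described above can be illustrated with a toy sketch: suppose the network emits N short path segments plus, for each segment, a score over K candidate paths (the "path masks"); an argmax over those scores assigns every segment to one path in a single pass. All shapes and names below are illustrative assumptions, not the paper's actual interface.

```python
import numpy as np

def group_segments_by_mask(segments, mask_logits):
    """Assign each predicted segment to the path whose mask scores it highest.

    segments:    (N, P, 3) array of N segments, P waypoints each, in 3D.
    mask_logits: (N, K) array of per-segment scores over K candidate paths.
    Returns a dict mapping path index -> (M, P, 3) array of its segments.
    """
    assignment = np.argmax(mask_logits, axis=1)  # (N,) path id per segment
    paths = {}
    for k in np.unique(assignment):
        paths[int(k)] = segments[assignment == k]
    return paths

# Toy example: 4 two-waypoint segments scored against 2 candidate paths.
segments = np.arange(4 * 2 * 3, dtype=float).reshape(4, 2, 3)
mask_logits = np.array([[2.0, 0.1],
                        [1.5, 0.3],
                        [0.2, 1.9],
                        [0.4, 2.2]])
paths = group_segments_by_mask(segments, mask_logits)
print({k: v.shape for k, v in paths.items()})  # {0: (2, 2, 3), 1: (2, 2, 3)}
```

In the paper the clustering is differentiable and learned jointly with segment regression; the hard argmax here only mimics the inference-time grouping.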

📝 Abstract
Object-Centric Motion Generation (OCMG) plays a key role in a variety of industrial applications, such as robotic spray painting and welding, requiring efficient, scalable, and generalizable algorithms to plan multiple long-horizon trajectories over free-form 3D objects. However, existing solutions rely on specialized heuristics, expensive optimization routines, or restrictive geometry assumptions that limit their adaptability to real-world scenarios. In this work, we introduce a novel, fully data-driven framework that tackles OCMG directly from 3D point clouds, learning to generalize expert path patterns across free-form surfaces. We propose MaskPlanner, a deep learning method that predicts local path segments for a given object while simultaneously inferring "path masks" to group these segments into distinct paths. This design induces the network to capture both local geometric patterns and global task requirements in a single forward pass. Extensive experimentation on a realistic robotic spray painting scenario shows that our approach attains near-complete coverage (above 99%) for unseen objects, while it remains task-agnostic and does not explicitly optimize for paint deposition. Moreover, our real-world validation on a 6-DoF specialized painting robot demonstrates that the generated trajectories are directly executable and yield expert-level painting quality. Our findings crucially highlight the potential of the proposed learning method for OCMG to reduce engineering overhead and seamlessly adapt to several industrial use cases.
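The abstract reports near-complete coverage (above 99%) on unseen objects. One common way to score such a metric, shown here as an assumption rather than the paper's exact evaluation protocol, is to count a surface point as covered when any trajectory waypoint passes within a given radius of it:

```python
import numpy as np

def surface_coverage(surface_points, waypoints, radius):
    """Fraction of surface points within `radius` of any waypoint.

    surface_points: (S, 3) object point cloud.
    waypoints:      (W, 3) concatenated trajectory waypoints.
    """
    # Pairwise distances (S, W); fine for small inputs, use a KD-tree at scale.
    d = np.linalg.norm(surface_points[:, None, :] - waypoints[None, :, :], axis=2)
    covered = d.min(axis=1) <= radius
    return covered.mean()

# Toy check: 3 of 4 surface points lie within radius 0.5 of a waypoint.
pts = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [10, 0, 0]], dtype=float)
wps = np.array([[0, 0, 0.2], [1, 0, 0.2], [2, 0, 0.2]], dtype=float)
print(surface_coverage(pts, wps, 0.5))  # 0.75
```

The paper's metric likely accounts for spray deposition geometry rather than a fixed radius; this sketch only conveys the point-cloud flavor of the evaluation.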
Problem

Research questions and friction points this paper is trying to address.

Existing OCMG solutions rely on specialized heuristics or expensive optimization routines
Restrictive geometry assumptions limit adaptability to real-world scenarios
Multiple long-horizon trajectories must be planned over free-form 3D objects
Innovation

Methods, ideas, or system contributions that make the work stand out.

End-to-end deep learning for motion generation from unstructured 3D point clouds
Path masks that group predicted segments into distinct paths in a single forward pass
Trajectories directly executable on a 6-DoF painting robot with expert-level quality