GMT: Goal-Conditioned Multimodal Transformer for 6-DOF Object Trajectory Synthesis in 3D Scenes

📅 2026-03-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing approaches struggle to generate physically plausible and precise 6-degree-of-freedom (6-DOF) object manipulation trajectories in complex scenes due to their reliance on 2D or partial 3D representations. This work proposes a multimodal Transformer framework that, for the first time, jointly models 3D bounding boxes, point clouds, semantic categories, and target end-effector poses in an end-to-end manner, producing continuous 6-DOF trajectories in a sequential fashion. Through a tailored conditional fusion strategy, the method significantly outperforms strong baselines such as CHOIS and GIMO on both synthetic and real-world datasets, achieving notable advances in both positional accuracy and orientation control. These results establish a new benchmark for learning-based manipulation planning.

📝 Abstract
Synthesizing controllable 6-DOF object manipulation trajectories in 3D environments is essential for enabling robots to interact with complex scenes, yet remains challenging due to the need for accurate spatial reasoning, physical feasibility, and multimodal scene understanding. Existing approaches often rely on 2D or partial 3D representations, limiting their ability to capture full scene geometry and constraining trajectory precision. We present GMT, a multimodal Transformer framework that generates realistic and goal-directed object trajectories by jointly leveraging 3D bounding box geometry, point cloud context, semantic object categories, and target end poses. The model represents trajectories as continuous 6-DOF pose sequences and employs a tailored conditioning strategy that fuses geometric, semantic, contextual, and goal-oriented information. Extensive experiments on synthetic and real-world benchmarks demonstrate that GMT outperforms state-of-the-art human motion and human-object interaction baselines, such as CHOIS and GIMO, achieving substantial gains in spatial accuracy and orientation control. Our method establishes a new benchmark for learning-based manipulation planning and shows strong generalization to diverse objects and cluttered 3D environments. Project page: https://huajian-zeng.github.io/projects/gmt/.
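The conditioning strategy described in the abstract — fusing bounding-box geometry, point-cloud context, a semantic category, and a target end pose into condition tokens that a Transformer decodes into a continuous 6-DOF pose sequence — can be sketched roughly as follows. This is an illustrative sketch only, not the authors' implementation: all module names, feature dimensionalities, the mean-pooled point encoder, and the xyz + quaternion pose parameterization are assumptions.

```python
import torch
import torch.nn as nn

class GMTSketch(nn.Module):
    """Hypothetical sketch of a goal-conditioned multimodal Transformer
    (not the paper's code): four modality encoders produce condition
    tokens; learned step queries cross-attend to them and are decoded
    into a fixed-horizon sequence of 6-DOF poses."""

    def __init__(self, d_model=128, n_categories=50, horizon=16):
        super().__init__()
        # Per-modality encoders (dimensionalities are assumptions).
        self.bbox_enc = nn.Linear(6, d_model)    # 3D box: center + size
        self.point_enc = nn.Linear(3, d_model)   # per-point XYZ, pooled below
        self.cat_emb = nn.Embedding(n_categories, d_model)
        self.goal_enc = nn.Linear(7, d_model)    # goal pose: xyz + quaternion
        # One learned query per trajectory step.
        self.query = nn.Parameter(torch.randn(horizon, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 7)        # per-step pose: xyz + quaternion

    def forward(self, bbox, points, category, goal):
        # Fuse all four modalities into a small set of condition tokens.
        pc = self.point_enc(points).mean(dim=1, keepdim=True)  # global point feature
        cond = torch.cat([
            self.bbox_enc(bbox).unsqueeze(1),
            pc,
            self.cat_emb(category).unsqueeze(1),
            self.goal_enc(goal).unsqueeze(1),
        ], dim=1)                                              # (B, 4, d_model)
        q = self.query.unsqueeze(0).expand(bbox.size(0), -1, -1)
        traj = self.head(self.decoder(q, cond))                # (B, horizon, 7)
        # Normalize the quaternion part so every step is a valid rotation.
        pos, quat = traj[..., :3], traj[..., 3:]
        quat = quat / quat.norm(dim=-1, keepdim=True).clamp_min(1e-8)
        return torch.cat([pos, quat], dim=-1)

model = GMTSketch()
out = model(torch.randn(2, 6),        # bounding boxes
            torch.randn(2, 256, 3),   # scene point clouds
            torch.tensor([3, 7]),     # semantic category ids
            torch.randn(2, 7))        # target end poses
print(out.shape)  # torch.Size([2, 16, 7])
```

A non-autoregressive decoder with learned step queries is one plausible reading of "continuous 6-DOF pose sequences"; the paper's "sequential fashion" could equally mean autoregressive decoding, where each step's pose feeds back as the next query.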
Problem

Research questions and friction points this paper is trying to address.

6-DOF trajectory synthesis
3D scene understanding
object manipulation
spatial reasoning
multimodal perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

6-DOF trajectory synthesis
multimodal transformer
goal-conditioned generation
3D scene understanding
object manipulation planning
🔎 Similar Papers
2024-07-16 · Neural Information Processing Systems · Citations: 16