TGM-VLA: Task-Guided Mixup for Sampling-Efficient and Robust Robotic Manipulation

📅 2026-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of low training efficiency and poor robustness in robotic imitation learning, which stem from redundant keyframes, uneven temporal distribution, and difficulties in recognizing dark-colored objects under multi-view projections. To overcome these issues, the paper introduces three core innovations: a task-guided dynamic keyframe sampling strategy, a color-inverted multi-view projection module, and a task-aware fusion mechanism that integrates point clouds with action heatmaps. The proposed approach enables end-to-end vision-language-action modeling and achieves state-of-the-art performance on the RLBench and COLOSSEUM benchmarks, attaining success rates of 90.5% and 68.8%, respectively. Furthermore, it reduces memory consumption by 80% and accelerates training by a factor of five compared to existing methods.
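The color-inverted projection idea can be illustrated with a small sketch. This is not the paper's code; the function name and array shapes are hypothetical, and the sketch only shows why inversion restores contrast for dark objects rendered on a black background.

```python
import numpy as np

def color_inverted_views(rgb_views):
    """Return a color-inverted copy of each projected multi-view image.

    On a black (zero-valued) background, dark objects have pixel values
    near 0 and are hard to separate from the background. Inverting the
    colors (255 - v) maps dark objects to bright values while the
    background becomes white, restoring contrast for the second branch.
    """
    rgb_views = np.asarray(rgb_views, dtype=np.uint8)
    return 255 - rgb_views

# A dark object (value 10) on a black background (value 0):
view = np.zeros((4, 4, 3), dtype=np.uint8)
view[1:3, 1:3] = 10              # nearly invisible against the background
inv = color_inverted_views(view)  # object -> 245, background -> 255
```

In the paper's framework this inverted rendering is an auxiliary branch alongside the standard projection, so the model sees both versions rather than replacing one with the other.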

📝 Abstract
The performance of robotic imitation learning is fundamentally limited by data quality and training strategies. Prevalent sampling strategies on RLBench suffer from severe keyframe redundancy and imbalanced temporal distribution, leading to inefficient memory usage and unstable optimization. Moreover, reprojecting point clouds onto multi-view images with a black background--while more efficient than voxel-based methods--often causes dark objects to be indistinguishable and hard to manipulate. In this work, we propose a novel holistic framework that significantly improves both model performance and training efficiency. First, we redesign and optimize the keyframe sampling strategy, reducing memory consumption by 80% and accelerating training speed by 5x. Second, we augment the model with a color inversion projection branch--a simple yet effective module that resolves the ambiguity of dark objects. Finally, we propose a task-guided mixup technique that dynamically fuses point clouds and action heatmaps according to task instructions, greatly improving robustness to distractors and performance in multi-goal scenarios. Extensive experiments demonstrate that our method achieves state-of-the-art performance with a 90.5% success rate on RLBench and 68.8% on the COLOSSEUM benchmark under challenging interference conditions. Our code and checkpoints are available at https://github.com/PuFanqi23/TGM-VLA.
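The task-guided mixup described above can be sketched as a convex combination whose mixing weight is driven by the task instruction. This is a minimal illustration, not the paper's implementation; the function name, the scalar `task_weight`, and the assumption that it is derived from the language instruction (e.g. attention over task tokens) are all hypothetical.

```python
import numpy as np

def task_guided_mixup(pc_feat, heatmap, task_weight):
    """Fuse point-cloud features with an action heatmap.

    `task_weight` in [0, 1] is assumed to be predicted from the task
    instruction; it shifts the fusion toward scene geometry (the point
    cloud) or toward the predicted action heatmap, so distractor-heavy
    or multi-goal tasks can re-weight the two sources dynamically.
    """
    lam = float(np.clip(task_weight, 0.0, 1.0))
    return lam * pc_feat + (1.0 - lam) * heatmap

# Example: 30% geometry, 70% heatmap.
fused = task_guided_mixup(np.ones((2, 2)), np.zeros((2, 2)), 0.3)
```

The key design point is that the mixing coefficient is not a fixed hyperparameter but conditioned on the task, which is what makes the fusion "task-guided".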
Problem

Research questions and friction points this paper is trying to address.

robotic imitation learning
keyframe redundancy
temporal distribution imbalance
dark object ambiguity
sampling efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Task-Guided Mixup
Keyframe Sampling Optimization
Color Inversion Projection
Robotic Imitation Learning
Point Cloud Fusion
Fanqi Pu
Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
Lei Jiang
The National and Local Co-Build Humanoid Robotics Innovation Center, Shanghai, China
Wenming Yang
Tsinghua University
Computer Vision · Image Processing