🤖 AI Summary
This work addresses the challenges of low training efficiency and poor robustness in robotic imitation learning, which stem from redundant keyframes, uneven temporal distribution, and difficulties in recognizing dark-colored objects under multi-view projections. To overcome these issues, the paper introduces three core innovations: a task-guided dynamic keyframe sampling strategy, a color-inverted multi-view projection module, and a task-aware fusion mechanism that integrates point clouds with action heatmaps. The proposed approach enables end-to-end vision-language-action modeling and achieves state-of-the-art performance on the RLBench and COLOSSEUM benchmarks, attaining success rates of 90.5% and 68.8%, respectively. Furthermore, it reduces memory consumption by 80% and accelerates training by a factor of five compared to existing methods.
📝 Abstract
The performance of robotic imitation learning is fundamentally limited by data quality and training strategy. Prevalent sampling strategies on RLBench suffer from severe keyframe redundancy and an imbalanced temporal distribution, leading to inefficient memory usage and unstable optimization. Moreover, reprojecting point clouds onto multi-view images with a black background, while more efficient than voxel-based methods, often renders dark objects indistinguishable and hard to manipulate. In this work, we propose a holistic framework that significantly improves both model performance and training efficiency. First, we redesign and optimize the keyframe sampling strategy, reducing memory consumption by 80% and accelerating training by 5x. Second, we augment the model with a color-inversion projection branch: a simple yet effective module that resolves the ambiguity of dark objects. Finally, we propose a task-guided mixup technique that dynamically fuses point clouds and action heatmaps according to task instructions, greatly improving robustness to distractors and performance in multi-goal scenarios. Extensive experiments show that our method achieves state-of-the-art performance, with a 90.5% success rate on RLBench and 68.8% on the COLOSSEUM benchmark under challenging interference conditions. Our code and checkpoints are available at https://github.com/PuFanqi23/TGM-VLA.
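To make the color-inversion idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual branch): when point clouds are reprojected onto a black canvas, dark objects blend into the background, so feeding the encoder an inverted copy of the same projection makes those regions bright and distinguishable. The function name and shapes here are illustrative assumptions.

```python
import numpy as np

def project_with_inversion(proj_rgb: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Given a multi-view projection rendered on a black background
    (uint8 RGB, shape H x W x 3), return the original image together
    with its color-inverted counterpart, so a downstream encoder can
    see dark objects as bright ones.

    Hypothetical helper for illustration; not code from the paper.
    """
    inverted = (255 - proj_rgb).astype(np.uint8)  # dark-on-black becomes bright-on-white
    return proj_rgb, inverted

# A near-black object pixel such as (10, 10, 10) maps to (245, 245, 245),
# while the black background (0, 0, 0) maps to pure white (255, 255, 255).
```

Both views could then be encoded and fused, letting the model rely on whichever rendering keeps the object visible.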