🤖 AI Summary
This study addresses the lack of systematic investigation into which sensor configurations let humanoid robots learn manipulation tasks efficiently via imitation learning under data-constrained conditions. Building on the Action Chunking with Transformers (ACT) framework on the Unitree G1 platform, this work proposes the first open-source, unified ablation evaluation methodology, conducting sensor-masking experiments across 14 multimodal sensor combinations that incorporate active stereo vision, tactile sensing, and proprioception. The results demonstrate that a streamlined active stereo vision setup achieves success rates of 87.5% and 94.4% on spatial generalization and structured manipulation tasks, respectively, significantly outperforming more complex multi-sensor systems. Notably, adding pressure sensors to this setup degrades success on the structured manipulation task to 67.3%, revealing that redundant modalities with low signal-to-noise ratios can impair learning efficacy.
📝 Abstract
The complexity of teaching humanoid robots new tasks is one of the major reasons hindering their widespread adoption in industry. While Imitation Learning (IL), particularly Action Chunking with Transformers (ACT), enables rapid task acquisition, there is no consensus yet on the optimal sensory hardware required for manipulation tasks. This paper benchmarks 14 sensor combinations on the Unitree G1 humanoid robot, equipped with three-finger hands, across two manipulation tasks. We explicitly evaluate the integration of tactile and proprioceptive modalities alongside active vision. Our analysis demonstrates that strategic sensor selection can outperform complex configurations in data-limited regimes while reducing computational overhead. We develop an open-source Unified Ablation Framework that applies sensor masking to a comprehensive master dataset. Results indicate that additional modalities often degrade performance for IL with limited data. A minimal active stereo-camera setup outperformed complex multi-sensor configurations, achieving 87.5% success in a spatial generalization task and 94.4% in a structured manipulation task. Conversely, adding pressure sensors to this setup reduced success in the latter task to 67.3% due to a low signal-to-noise ratio. We conclude that, in data-limited regimes, active vision offers a superior trade-off between robustness and complexity. While tactile modalities may require larger datasets to be effective, our findings confirm that strategic sensor selection is critical for designing an efficient learning process.
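The abstract does not detail how the Unified Ablation Framework applies sensor masking, so the sketch below is only a rough illustration of one plausible implementation: every ablation reuses the same master dataset and policy architecture, and masked modalities are zeroed out so that only the information content of the inputs changes between runs. All names here (`SENSOR_KEYS`, `mask_sample`, the modality shapes) are hypothetical and not taken from the paper's code.

```python
import numpy as np

# Hypothetical modality keys for a master dataset recorded with all sensors active.
SENSOR_KEYS = ["stereo_rgb", "wrist_rgb", "tactile", "pressure", "proprio"]

def mask_sample(sample: dict, active: set) -> dict:
    """Zero out every modality not in `active`, keeping tensor shapes fixed.

    Keeping shapes fixed lets a single ACT-style policy architecture be reused
    across all ablation configurations; the masked streams simply carry no signal.
    """
    masked = {}
    for key in SENSOR_KEYS:
        data = np.asarray(sample[key])
        masked[key] = data if key in active else np.zeros_like(data)
    return masked

# Example: one of the 14 ablation configurations — stereo vision plus proprioception only.
ablation = {"stereo_rgb", "proprio"}

demo = {
    "stereo_rgb": np.random.rand(480, 640, 3),
    "wrist_rgb": np.random.rand(480, 640, 3),
    "tactile": np.random.rand(2, 9),    # per-finger tactile array (illustrative shape)
    "pressure": np.random.rand(2, 3),   # per-finger pressure readings (illustrative shape)
    "proprio": np.random.rand(29),      # joint-state vector (illustrative dimension)
}

masked_demo = mask_sample(demo, ablation)
assert masked_demo["tactile"].sum() == 0.0  # masked modalities contribute nothing
```

An alternative would be to drop masked streams from the policy's input tokens entirely; zero-filling is shown here only because it keeps the network identical across runs, which is one reading of what "unified" implies in this context.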