🤖 AI Summary
This study addresses the limited perceptual representation capability of humanoid robots in embodied intelligence. The authors propose the Frequency-Enhanced Wavelet-based Transformer (FEWT) framework, which jointly models time- and frequency-domain features: temporal multi-scale decomposition is achieved via Time-Series Discrete Wavelet Transform (TS-DWT), while Frequency-Enhanced Efficient Multi-Scale Attention (FE-EMA) and residual cross-scale information aggregation enable human-like motion modeling under imitation learning. Compared to the ACT baseline, FEWT improves task success rates by up to 30% in simulation and by 6–12% in real-world scenarios, significantly enhancing model robustness and action-reproduction fidelity. The core contribution is the deep integration of wavelet analysis with Transformer architectures, enabling, for the first time, frequency-domain-guided dynamic multi-scale attention modeling.
📝 Abstract
Embodied intelligence bridges the physical world and the information space. As its typical physical embodiment, humanoid robots have shown great promise through robot-learning algorithms in recent years. In this study, a hardware platform, comprising a humanoid robot and an exoskeleton-style teleoperation cabin, was developed to realize intuitive remote manipulation and efficient collection of anthropomorphic action data. To improve the perceptual representation of the humanoid robot, an imitation learning framework, termed Frequency-Enhanced Wavelet-based Transformer (FEWT), was proposed, which consists of two primary modules: Frequency-Enhanced Efficient Multi-Scale Attention (FE-EMA) and Time-Series Discrete Wavelet Transform (TS-DWT). By combining multi-scale wavelet decomposition with a residual network, FE-EMA dynamically fuses features from both the time domain and the frequency domain. This fusion effectively captures feature information across scales, thereby enhancing model robustness. Experimental results demonstrate that FEWT improves the success rate of the state-of-the-art baseline (Action Chunking with Transformers, ACT) by up to 30% in simulation and by 6–12% in real-world tasks.
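The multi-scale temporal decomposition at the heart of TS-DWT can be illustrated with a minimal sketch. This is not the paper's implementation: the wavelet basis, decomposition depth, and the toy "joint trajectory" below are all illustrative assumptions (a Haar wavelet is used because it is the simplest orthonormal DWT). Each level splits the signal into a low-frequency approximation band and a high-frequency detail band, yielding the kind of coarse-to-fine feature pyramid that FE-EMA could then fuse.

```python
import numpy as np

def haar_dwt_multiscale(signal, levels=3):
    """Decompose a 1-D time series into multi-scale wavelet bands.

    Returns [detail_1, ..., detail_L, approx_L]: one high-frequency
    detail band per level plus the final low-frequency approximation.
    Illustrative stand-in for the paper's TS-DWT module.
    """
    coeffs = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        if len(approx) % 2:  # pad odd-length signals to an even length
            approx = np.append(approx, approx[-1])
        pairs = approx.reshape(-1, 2)
        detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)  # high-freq band
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)  # low-freq band
        coeffs.append(detail)
    coeffs.append(approx)
    return coeffs

# Hypothetical "joint trajectory": slow motion trend + fast oscillation.
t = np.linspace(0.0, 1.0, 64)
traj = np.sin(2 * np.pi * t) + 0.1 * np.sin(2 * np.pi * 16 * t)
bands = haar_dwt_multiscale(traj, levels=3)
print([len(b) for b in bands])  # band lengths halve per level
```

Because the Haar transform is orthonormal, the decomposition preserves signal energy, so no temporal information is lost before the bands are handed to downstream attention layers.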