🤖 AI Summary
Visual deep reinforcement learning suffers from low sample efficiency on unstructured tasks and transfers poorly to real-world robots. To address these challenges, this paper proposes MENTOR, a framework that integrates modular representation learning with task-aware optimization. Its key contributions are: (1) a task-oriented perturbation mechanism, introduced to visual RL for the first time, which improves policy robustness and generalization; and (2) the first integration of a mixture-of-experts (MoE) architecture into a visual RL backbone, which mitigates multi-task gradient interference and improves representation disentanglement and cross-task transferability. The method achieves significant improvements over state-of-the-art approaches on three major simulation benchmarks. Crucially, it transfers to a real robotic arm, completing three complex manipulation tasks with an average success rate of 83%, a 51-percentage-point gain over the best existing model-free method (32%), demonstrating both high sample efficiency and practical deployability.
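The MoE idea above can be illustrated with a minimal sketch: a gating network softly routes each input across several small expert networks and blends their outputs, so different tasks can lean on different experts instead of forcing one shared MLP to serve them all. This is an illustrative toy, not MENTOR's actual implementation; the class name, dimensions, and linear experts are invented for this example.

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

class MoELayer:
    """Toy mixture-of-experts layer: a linear gate produces one score per
    expert; the output is the gate-weighted blend of the experts' outputs."""

    def __init__(self, in_dim, out_dim, n_experts, seed=0):
        rng = random.Random(seed)
        # Each expert is an independent linear map (weights only, no bias).
        self.experts = [
            [[rng.gauss(0.0, 0.1) for _ in range(in_dim)] for _ in range(out_dim)]
            for _ in range(n_experts)
        ]
        # The gate maps the input to one routing score per expert.
        self.gate = [[rng.gauss(0.0, 0.1) for _ in range(in_dim)] for _ in range(n_experts)]

    @staticmethod
    def _matvec(w, x):
        return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

    def forward(self, x):
        weights = softmax(self._matvec(self.gate, x))  # soft routing, sums to 1
        out = [0.0] * len(self.experts[0])
        for w, expert in zip(weights, self.experts):
            for i, v in enumerate(self._matvec(expert, x)):
                out[i] += w * v
        return out, weights
```

Because the gate's weights are learned, gradients from a given task mainly flow into the experts that task actually routes to, which is the intuition behind "modular expert learning to avoid gradient conflicts."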
📝 Abstract
Visual deep reinforcement learning (RL) enables robots to acquire skills from visual input for unstructured tasks. However, current algorithms suffer from low sample efficiency, limiting their practical applicability. In this work, we present MENTOR, a method that improves both the architecture and the optimization of RL agents. Specifically, MENTOR replaces the standard multi-layer perceptron (MLP) with a mixture-of-experts (MoE) backbone, enhancing the agent's ability to handle complex tasks by leveraging modular expert learning to avoid gradient conflicts. Furthermore, MENTOR introduces a task-oriented perturbation mechanism, which heuristically samples perturbation candidates containing task-relevant information, leading to more targeted and effective optimization. MENTOR outperforms state-of-the-art methods across three simulation domains -- DeepMind Control Suite, Meta-World, and Adroit. Additionally, MENTOR achieves an average success rate of 83% on three challenging real-world robotic manipulation tasks -- peg insertion, cable routing, and tabletop golf -- far surpassing the 32% success rate of the strongest current model-free visual RL algorithm. These results underscore the importance of sample efficiency in advancing visual RL for real-world robotics. Experimental videos are available at https://suninghuang19.github.io/mentor_page.
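The task-oriented perturbation mechanism can also be sketched, under stated assumptions: suppose the agent keeps a buffer of high-performing past parameter vectors together with their episodic returns, samples one candidate with probability increasing in return, and interpolates the current weights toward it rather than toward random noise. The function name and the `alpha` and `temperature` parameters below are illustrative choices, not values from the paper.

```python
import math
import random

def task_oriented_perturb(params, candidate_buffer, alpha=0.8, temperature=1.0, rng=None):
    """Perturb `params` toward a candidate drawn from a buffer of past
    high-performing parameter vectors (a task-relevant perturbation),
    instead of adding task-agnostic random noise.

    params: current parameter vector (flat list of floats).
    candidate_buffer: list of (param_vector, episodic_return) pairs.
    alpha: fraction of the current parameters to keep (1.0 = no change).
    """
    rng = rng or random.Random(0)
    returns = [r for _, r in candidate_buffer]
    # Softmax over returns: higher-return candidates are sampled more often.
    m = max(returns)
    probs = [math.exp((r - m) / temperature) for r in returns]
    total = sum(probs)
    probs = [p / total for p in probs]
    # Inverse-CDF sampling of one candidate index.
    u, acc, idx = rng.random(), 0.0, 0
    for i, p in enumerate(probs):
        acc += p
        if u <= acc:
            idx = i
            break
    candidate, _ = candidate_buffer[idx]
    # Interpolate: keep alpha of the current weights, mix in the candidate.
    return [alpha * p + (1.0 - alpha) * c for p, c in zip(params, candidate)]
```

The contrast with standard random-perturbation schemes is that the perturbation direction here is drawn from parameters already known to score well on the task, which is one way to read "perturbation candidates containing task-relevant information."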