🤖 AI Summary
To address the weak generalization of single-policy approaches and the rigid resource allocation caused by task-difficulty heterogeneity in multi-task reinforcement learning, this paper proposes a module-level dynamic model evolution framework based on genetic algorithms. The framework represents modular neural network architectures as variable-length binary genotypes, enabling gradient-free optimization, on-demand insertion of functional modules, and structural self-adaptation during training, and thereby fine-grained adaptation to task complexity. Evaluated on the Meta-World benchmark, the method achieves state-of-the-art performance, significantly improving cross-task generalization and success rates on high-difficulty tasks. Notably, it is the first approach to realize end-to-end, data-driven, dynamic evolution of modular architectures in reinforcement learning, in which both the architecture and the policy parameters co-evolve adaptively during training.
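To make the module-level idea concrete, the sketch below shows one hypothetical way a binary genotype could select which modules from a shared pool are composed into a task-specific policy. This is not the paper's actual architecture or API; the class name `ModularPolicy`, the layer sizes, and the output-averaging rule are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class ModularPolicy(nn.Module):
    """Toy modular policy: each layer holds a pool of module MLPs,
    and a flat binary genotype decides which modules are active."""

    def __init__(self, obs_dim, act_dim, hidden=64, n_modules=4, n_layers=2):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.ModuleList([
                nn.Linear(obs_dim if l == 0 else hidden, hidden)
                for _ in range(n_modules)
            ])
            for l in range(n_layers)
        ])
        self.head = nn.Linear(hidden, act_dim)

    def forward(self, obs, genotype):
        # genotype: one bit per (layer, module) pair; a longer genotype
        # simply indexes more pairs, which is what allows the model to grow.
        x, idx = obs, 0
        for modules in self.layers:
            outs = []
            for m in modules:
                if genotype[idx]:
                    outs.append(torch.relu(m(x)))
                idx += 1
            if not outs:                       # keep at least one module active
                outs = [torch.relu(modules[0](x))]
            x = torch.stack(outs).mean(dim=0)  # combine active module outputs
        return self.head(x)

# Illustrative dimensions only (not taken from the paper).
policy = ModularPolicy(obs_dim=39, act_dim=4)
obs = torch.randn(1, 39)
genotype = [1, 0, 1, 1, 0, 1, 0, 0]            # 2 layers x 4 modules
action = policy(obs, genotype)
```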
📝 Abstract
Multi-task reinforcement learning employs a single policy to complete various tasks, aiming to develop an agent with generalization ability across different scenarios. Given the shared characteristics of tasks, the agent's learning efficiency can be enhanced through parameter sharing. Existing approaches typically use a routing network to generate a specific route for each task and reconstruct a shared set of modules into diverse models that complete multiple tasks simultaneously. However, due to the inherent differences among tasks, it is crucial to allocate resources according to task difficulty, which is constrained by the model's structure. To this end, we propose a Model Evolution framework with Genetic Algorithm (MEGA), which enables the model to evolve during training according to the difficulty of the tasks. When the current model is insufficient for certain tasks, the framework automatically incorporates additional modules, enhancing the model's capabilities. Moreover, to fit our model evolution framework, we introduce a genotype module-level model that uses binary sequences as genotype policies for model reconstruction, and we optimize these genotype policies with a gradient-free genetic algorithm. Unlike routing networks with fixed output dimensions, our approach allows the length of the genotype policy to be adjusted dynamically, enabling it to accommodate models with a varying number of modules. We conducted experiments on various robotic manipulation tasks in the Meta-World benchmark. Our state-of-the-art performance demonstrates the effectiveness of the MEGA framework. We will release our source code to the public.
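The gradient-free optimization of genotype policies described above can be pictured with a toy genetic algorithm like the one below. This is only a sketch under assumed names (`Genotype`, `evolve`, `grow`) and a placeholder fitness function; in MEGA the fitness would come from the task performance of the model reconstructed from each genotype.

```python
import random

class Genotype:
    """Variable-length binary sequence selecting modules of a modular model."""

    def __init__(self, bits):
        self.bits = list(bits)           # e.g. [1, 0, 1, 1] -> modules 0, 2, 3 active

    def mutate(self, rate=0.1):
        child = Genotype(self.bits)
        for i in range(len(child.bits)):
            if random.random() < rate:
                child.bits[i] ^= 1       # bit flip: gradient-free variation
        return child

    def crossover(self, other):
        # single-point crossover; works even if the two genotypes differ in length
        point = random.randint(1, min(len(self.bits), len(other.bits)) - 1)
        return Genotype(self.bits[:point] + other.bits[point:])

    def grow(self, n_new_modules=1):
        # extend the genotype so newly inserted modules become selectable
        self.bits.extend([1] * n_new_modules)


def evolve(population, fitness_fn, generations=10, elite=2):
    """Toy GA loop: evaluate genotypes, keep elites, breed the rest."""
    for _ in range(generations):
        scored = sorted(population, key=fitness_fn, reverse=True)
        parents = scored[:max(elite, len(scored) // 2)]
        children = []
        while len(children) < len(population) - elite:
            a, b = random.sample(parents, 2)
            children.append(a.crossover(b).mutate())
        population = scored[:elite] + children
    return max(population, key=fitness_fn)


if __name__ == "__main__":
    # Placeholder fitness: rewards genotypes that activate more modules.
    # In the actual framework this would be the return/success of the rebuilt model.
    fitness = lambda g: sum(g.bits)
    pop = [Genotype([random.randint(0, 1) for _ in range(6)]) for _ in range(8)]
    best = evolve(pop, fitness)
    best.grow(2)                         # architecture expands for harder tasks
    print("best genotype:", best.bits)
```

Because the genotype is just a bit sequence, `grow` can lengthen it in place when extra modules are added for harder tasks, whereas a routing network with a fixed output dimension would require changing its output layer.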