🤖 AI Summary
To address the low evolutionary efficiency and poor parallelizability of MAP-Elites (ME) in high-dimensional neural policy spaces, this paper proposes a decentralized, actor-critic-free parallel quality-diversity (QD) framework. Methodologically, it introduces (1) a novel time-step performance-driven behavioral mutation mechanism that eliminates reliance on centralized policy-gradient training, and (2) the integration of behavioral representation modeling with distributed mutation sampling, enabling GPU-level parallelism within the MAP-Elites architecture. Empirically, the framework generates high-quality, diverse deep neural network policies on a single GPU in under 250 seconds, on average five times faster than state-of-the-art methods, while maintaining competitive sample efficiency. This advance significantly improves the scalability and real-time applicability of QD algorithms in large-scale neural policy search.
📝 Abstract
Quality-Diversity optimization comprises a family of evolutionary algorithms aimed at generating a collection of diverse and high-performing solutions. MAP-Elites (ME), a notable example, is used effectively in fields like evolutionary robotics. However, the reliance of ME on random mutations from Genetic Algorithms limits its ability to evolve high-dimensional solutions. Methods proposed to overcome this include using gradient-based operators such as policy gradients or natural evolution strategies. While successful at scaling ME for neuroevolution, these methods often suffer from slow training speeds or difficulties in scaling with massive parallelization, due to high computational demands or reliance on centralized actor-critic training. In this work, we introduce a fast, sample-efficient ME-based algorithm capable of scaling up with massive parallelization, significantly reducing runtimes without compromising performance. Unlike existing policy-gradient quality-diversity methods, our method, ASCII-ME, does not rely on centralized actor-critic training. It performs behavioral variations based on time-step performance metrics and maps these variations to solutions using policy gradients. Our experiments show that ASCII-ME can generate a diverse collection of high-performing deep neural network policies in less than 250 seconds on a single GPU. Additionally, it operates, on average, five times faster than state-of-the-art algorithms while still maintaining competitive sample efficiency.
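For readers unfamiliar with the baseline the abstract builds on, the plain MAP-Elites loop, an archive of elites indexed by behavior descriptor, filled by random Genetic-Algorithm-style mutations, can be sketched as follows. This is a minimal toy illustration with made-up `fitness` and `descriptor` functions, not the ASCII-ME method itself:

```python
import random

def fitness(x):
    # Toy objective: negative squared distance from the origin.
    return -(x[0] ** 2 + x[1] ** 2)

def descriptor(x):
    # Toy behavior descriptor: the genotype itself, clipped to [0, 1].
    return tuple(min(max(v, 0.0), 1.0) for v in x)

def to_cell(desc, bins=10):
    # Discretize a descriptor into an archive cell index.
    return tuple(min(int(d * bins), bins - 1) for d in desc)

def map_elites(iterations=2000, seed=0):
    rng = random.Random(seed)
    archive = {}  # cell -> (fitness, genotype)
    for _ in range(iterations):
        if archive:
            # Select a random elite and apply a random Gaussian mutation,
            # the variation operator plain ME relies on.
            _, parent = archive[rng.choice(list(archive))]
            child = [v + rng.gauss(0.0, 0.1) for v in parent]
        else:
            child = [rng.random(), rng.random()]
        cell = to_cell(descriptor(child))
        f = fitness(child)
        # Keep the child only if its cell is empty or it beats the incumbent.
        if cell not in archive or f > archive[cell][0]:
            archive[cell] = (f, child)
    return archive

archive = map_elites()
print(len(archive), "cells filled")
```

The scalability limitation the abstract targets is visible here: the only search signal is the random `rng.gauss` perturbation, which degrades as the genotype grows to the size of a deep neural network's parameters.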