🤖 AI Summary
NEAT suffers from poor computational efficiency, limiting its scalability. To address this, we propose the first fully tensorized, GPU-accelerated NEAT library, which uniformly represents heterogeneous network topologies and evolutionary operations as batched tensors, enabling population-level parallel evolution. We introduce the first end-to-end tensorized framework supporting joint evolution of topology and weights, compatible with CPPN, HyperNEAT, and physics simulators such as Brax and Gymnax. Leveraging JAX, our implementation achieves automatic vectorization and hardware acceleration. On Brax-based robotic control benchmarks, our method achieves up to a 500× speedup over NEAT-Python, dramatically enhancing the scalability of large-scale neuroevolution. The code is publicly available.
📝 Abstract
The NeuroEvolution of Augmenting Topologies (NEAT) algorithm has received considerable recognition in the field of neuroevolution. Its effectiveness derives from initiating with simple networks and incrementally evolving both their topologies and weights. Although its capability across various challenges is evident, the algorithm's computational efficiency remains a bottleneck, limiting its scalability. To address these limitations, this paper introduces TensorNEAT, a GPU-accelerated library that applies tensorization to the NEAT algorithm. Tensorization reformulates NEAT's diverse network topologies and operations into uniformly shaped tensors, enabling efficient parallel execution across entire populations. TensorNEAT is built upon JAX, leveraging automatic function vectorization and hardware acceleration to significantly enhance computational efficiency. In addition to NEAT, the library supports variants such as CPPN and HyperNEAT, and integrates with benchmark environments like Gym, Brax, and gymnax. Experimental evaluations across various robotic control environments in Brax demonstrate that TensorNEAT delivers up to 500× speedups compared to existing implementations such as NEAT-Python. The source code for TensorNEAT is publicly available at: https://github.com/EMI-Group/tensorneat.
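The core tensorization idea can be sketched in a few lines of JAX. This is a hedged, minimal illustration, not TensorNEAT's actual API: it assumes each network is padded to a fixed maximum node count (`MAX_NODES`) so the whole population becomes one uniformly shaped tensor, and a boolean mask marks which connections actually exist; `jax.vmap` then evaluates every network in the population in parallel.

```python
import jax
import jax.numpy as jnp

# Hypothetical sketch of tensorized population evaluation (not TensorNEAT's API).
MAX_NODES = 8  # fixed upper bound on nodes per network; smaller nets are padded

def forward(weights, mask, x):
    # weights: (MAX_NODES, MAX_NODES) dense matrix of connection weights
    # mask:    (MAX_NODES, MAX_NODES) 1.0 where a connection exists, else 0.0
    # x:       (MAX_NODES,) input activations (zero-padded)
    # Padded/absent connections are zeroed out by the mask, so heterogeneous
    # topologies all share one uniform tensor shape.
    return jnp.tanh((weights * mask) @ x)

# A "population" of 4 networks stored as stacked, uniformly shaped tensors.
pop_weights = jax.random.normal(jax.random.PRNGKey(0), (4, MAX_NODES, MAX_NODES))
pop_masks = (jax.random.uniform(jax.random.PRNGKey(1),
                                (4, MAX_NODES, MAX_NODES)) < 0.3).astype(jnp.float32)
inputs = jnp.ones((MAX_NODES,))

# vmap batches the forward pass over the population axis; jit compiles it
# for the accelerator, so all networks are evaluated in one fused kernel launch.
batched_forward = jax.jit(jax.vmap(forward, in_axes=(0, 0, None)))
outputs = batched_forward(pop_weights, pop_masks, inputs)
print(outputs.shape)  # (4, MAX_NODES)
```

In this scheme, evolutionary operators (mutation, crossover) can likewise be written as pure functions over the padded tensors and vectorized with `jax.vmap`, which is what makes population-level parallelism on GPUs possible.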