🤖 AI Summary
Cellular automata (CA) research has lacked a general-purpose, hardware-accelerated, open-source library, which limits reproducibility, hinders collaboration, and constrains the exploration of novel paradigms.
Method: We introduce CAX, the first JAX-based, hardware-accelerated, open-source cellular automata library, designed functionally around native JAX primitives. It supports arbitrary numbers of dimensions as well as discrete, continuous, and hybrid state types, and it leverages XLA compilation, vmap/pmap parallelization, and automatic differentiation. Its modular CA kernel and dynamic-dimension mechanism significantly lower the barrier to experimentation.
Contribution/Results: Three novel CA experiments, including a one-dimensional CA that outperforms GPT-4 on the 1D-ARC challenge, are each implemented in just a few lines of code. On benchmarks spanning elementary CAs, neural CAs, and self-classifying MNIST digits, CAX achieves speedups of up to 2,000x over conventional implementations. This infrastructure enables efficient, scalable, and reproducible foundational CA research and interdisciplinary exploration at the intersection of AI and cellular automata.
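To make the "hardware-accelerated via native JAX primitives" point concrete, here is a minimal sketch (not the CAX API; the function and variable names are illustrative) of how a CA update rule written in pure JAX gets XLA-compiled. It implements Conway's Game of Life on a toroidal grid and advances a glider:

```python
import jax
import jax.numpy as jnp

def life_step(grid):
    # Count live neighbors by summing the eight shifted copies of the grid
    # (jnp.roll gives periodic/toroidal boundary conditions).
    neighbors = sum(
        jnp.roll(grid, (dy, dx), axis=(0, 1))
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Conway's rules: a live cell with 2 or 3 neighbors survives,
    # a dead cell with exactly 3 neighbors is born.
    return jnp.where((neighbors == 3) | ((grid == 1) & (neighbors == 2)), 1, 0)

step = jax.jit(life_step)  # compile the update once with XLA

# Seed an 8x8 grid with a glider.
glider = jnp.zeros((8, 8), dtype=jnp.int32)
glider = glider.at[0, 1].set(1).at[1, 2].set(1).at[2, 0].set(1).at[2, 1].set(1).at[2, 2].set(1)

grid = glider
for _ in range(4):  # one glider period
    grid = step(grid)
# After 4 steps the glider has translated one cell down and one cell right.
```

Because `life_step` is a pure array function, wrapping it in `jax.vmap` batches many grids in a single call, which is the vmap/pmap parallelization pattern the summary refers to.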
📝 Abstract
Cellular automata have become a cornerstone for investigating emergence and self-organization across diverse scientific disciplines, spanning neuroscience, artificial life, and theoretical physics. However, the absence of a hardware-accelerated cellular automata library limits the exploration of new research directions, hinders collaboration, and impedes reproducibility. In this work, we introduce CAX (Cellular Automata Accelerated in JAX), a high-performance and flexible open-source library designed to accelerate cellular automata research. CAX offers cutting-edge performance and a modular design through a user-friendly interface, and can support both discrete and continuous cellular automata with any number of dimensions. We demonstrate CAX's performance and flexibility through a wide range of benchmarks and applications. From classic models like elementary cellular automata and Conway's Game of Life to advanced applications such as growing neural cellular automata and self-classifying MNIST digits, CAX runs simulations up to 2,000 times faster. Furthermore, we demonstrate CAX's potential to accelerate research by presenting a collection of three novel cellular automata experiments, each implemented in just a few lines of code thanks to the library's modular architecture. Notably, we show that a simple one-dimensional cellular automaton can outperform GPT-4 on the 1D-ARC challenge.
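The abstract's simplest model class, elementary cellular automata, also illustrates the discrete one-dimensional case. Below is a hedged sketch (again not the CAX API; `eca_step` and `rule_to_table` are names invented for this example) of an elementary CA step in pure JAX, where a Wolfram rule number becomes an 8-entry lookup table indexed by each cell's 3-cell neighborhood:

```python
import jax
import jax.numpy as jnp

def eca_step(state, rule_table):
    # Encode each cell's neighborhood (left, center, right) as an index 0-7,
    # using jnp.roll for periodic boundaries, then look up the next value.
    left = jnp.roll(state, 1)
    right = jnp.roll(state, -1)
    idx = 4 * left + 2 * state + right
    return rule_table[idx]

def rule_to_table(rule):
    # Bit i of the Wolfram rule number gives the output for neighborhood pattern i.
    return jnp.array([(rule >> i) & 1 for i in range(8)], dtype=jnp.int32)

step = jax.jit(eca_step)  # XLA-compiled update

state = jnp.zeros(16, dtype=jnp.int32).at[8].set(1)  # single live cell
state = step(state, rule_to_table(110))  # one step of rule 110
```

From a single live cell, rule 110 activates the cell's left neighbor on the first step, so the pattern grows leftward over time; swapping the rule number is enough to explore all 256 elementary CAs.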