Low Rank Factorizations are Indirect Encodings for Deep Neuroevolution

📅 2025-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the inefficiency of standard neuroevolution (NE), which arises from its vast, unconstrained weight search space. The authors propose low-rank factorized neuroevolution, which indirectly encodes network weights via low-rank factors, thereby imposing a structural prior and drastically reducing search-space dimensionality while preserving representational capacity. To their knowledge, this is the first integration of a low-rank structural inductive bias into an NE framework, enabling parameter-efficient indirect encoding. They also identify and exploit a strong selective-suppression effect of factorized mutations: deleterious mutations disproportionately impair low-fitness individuals, accelerating convergence. Empirically, the method outperforms baseline NE on language modeling; on vision-based reinforcement learning (in both continuous and discrete environments), it matches non-factorized baselines while reducing computational overhead by roughly 40% via early elimination of poor candidates. The core contribution is the principled unification of low-rank decomposition with evolutionary optimization, balancing expressivity, search efficiency, and generalization.
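The indirect encoding described above can be sketched concretely: rather than mutating a full m x n weight matrix, evolution operates on two low-rank factors U (m x r) and V (r x n), and the effective weights are decoded as W = UV. The function names, mutation scale, and dimensions below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def init_factors(rng, m, n, rank, scale=0.1):
    """Initialize low-rank factors for one layer (the genotype)."""
    U = rng.normal(0.0, scale, size=(m, rank))
    V = rng.normal(0.0, scale, size=(rank, n))
    return U, V

def mutate_factors(rng, U, V, sigma=0.02):
    """Gaussian mutation applied in factor space, not weight space."""
    return (U + rng.normal(0.0, sigma, size=U.shape),
            V + rng.normal(0.0, sigma, size=V.shape))

def effective_weights(U, V):
    """Decode the indirect encoding into the full weight matrix."""
    return U @ V

rng = np.random.default_rng(0)
m, n, rank = 64, 64, 4
U, V = init_factors(rng, m, n, rank)

# The search space is m*rank + rank*n parameters instead of m*n.
print(U.size + V.size, "vs", m * n)  # 512 vs 4096

U2, V2 = mutate_factors(rng, U, V)
W = effective_weights(U2, V2)
assert np.linalg.matrix_rank(W) <= rank  # decoded weights stay low-rank
```

Because every mutation perturbs the factors, a single change to one row of U shifts an entire row of W, which is one intuition for why the encoding enforces structure across the network's weights.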

📝 Abstract
Deep neuroevolution is a highly scalable alternative to reinforcement learning due to its unique ability to encode network updates in a small number of bytes. Recent insights from traditional deep learning indicate high-dimensional models possess intrinsic, low-rank structure. In this work, we introduce low-rank, factorized neuroevolution: an indirect encoding through which we can search a small space of low-rank factors that enforce underlying structure across a network's weights. We compare our approach with non-factorized networks of similar and smaller size to understand how much performance can be attributed to the smaller search space. We evaluate our method on a language modeling task using transformers, as well as continuous and discrete vision-based reinforcement learning tasks. Our study shows that low-rank, factorized neuroevolution outperforms or is competitive with non-factorized neuroevolution, performing notably well on language modeling. Our results also suggest deleterious factorized mutations have a stronger negative impact on performance than deleterious non-factorized mutations, which significantly reduces the runtime on environments with early termination for bad performers. More broadly, these results show how we can use insights from backpropagation-based methods to enhance neuroevolution.
Problem

Research questions and friction points this paper is trying to address.

Can low-rank factorizations serve as an indirect encoding that makes deep neuroevolution's search space tractable?
How much of any performance gain is attributable to the smaller search space versus the imposed structure?
Does the approach hold up across language modeling and both continuous and discrete vision-based RL tasks?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Low-rank factors act as an indirect encoding that enforces structure across a network's weights
A smaller, structured search space improves search efficiency without sacrificing capacity
Outperforms or matches non-factorized neuroevolution, with notable gains on language modeling