🤖 AI Summary
Conventional neural networks rely on external optimizers and lack intrinsic mechanisms for autonomous evolution. Method: We propose Self-Referential Graph Hypernetworks (SR-GHNs), which internalize evolutionary dynamics—including mutation rates and population dynamics—as learnable network parameters. By integrating hypernetworks, stochastic parameter generation, graph neural networks, and policy-gradient reinforcement learning, SR-GHNs form a self-referential architecture capable of self-evaluation, mutation, and inheritance. Contribution/Results: Evaluated on dynamic environment benchmarks—CartPoleSwitch, LunarLander-Switch, and Ant-v5—SR-GHNs demonstrate rapid, robust adaptation to environmental switches. In Ant-v5, the model autonomously evolves coordinated gaits and leverages diversity-aware regulation to converge to high-quality policies. These results empirically validate the spontaneous emergence of evolvability and establish the feasibility of open-ended autonomous learning in neural systems.
📝 Abstract
How can neural networks evolve themselves without relying on external optimizers? We propose Self-Referential Graph HyperNetworks, systems in which the very machinery of variation and inheritance is embedded within the network. By uniting hypernetworks, stochastic parameter generation, and graph-based representations, Self-Referential GHNs mutate and evaluate themselves while adapting mutation rates as selectable traits. On new reinforcement learning benchmarks with environmental shifts (CartPoleSwitch, LunarLander-Switch), Self-Referential GHNs show swift, reliable adaptation and emergent population dynamics. In the locomotion benchmark Ant-v5, they evolve coherent gaits and exhibit fine-tuning behavior, autonomously decreasing population variation to concentrate search around high-quality solutions. Our findings support the idea that evolvability itself can emerge from neural self-reference. Self-Referential GHNs represent a step toward synthetic systems that more closely mirror biological evolution, offering tools for autonomous, open-ended learning agents.
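The core idea of internalizing evolutionary dynamics, with mutation rates inherited and selected alongside the parameters they govern, can be illustrated outside the full SR-GHN architecture. The following is a minimal, hypothetical sketch of self-adaptive mutation (in the style of classic self-adaptive evolution strategies), not the paper's method: the toy `fitness` function, the meta-mutation strength `tau`, and all population settings are illustrative assumptions standing in for the RL environments and learned components described above.

```python
import math
import random

def fitness(params):
    # Toy objective standing in for an RL return (assumed for illustration):
    # maximize the negative squared distance to a fixed target vector.
    target = [1.0, -2.0, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def mutate(individual):
    """Self-adaptive mutation: each individual carries its own per-parameter
    log mutation rates, which are themselves perturbed and inherited, making
    evolvability a selectable trait rather than an external hyperparameter."""
    params, log_sigmas = individual
    tau = 0.3  # meta-mutation strength for the mutation rates (assumed value)
    new_log_sigmas = [ls + tau * random.gauss(0, 1) for ls in log_sigmas]
    new_params = [p + math.exp(ls) * random.gauss(0, 1)
                  for p, ls in zip(params, new_log_sigmas)]
    return (new_params, new_log_sigmas)

def evolve(generations=200, pop_size=32, seed=0):
    random.seed(seed)
    # Each individual is (parameters, per-parameter log mutation rates).
    pop = [([random.gauss(0, 1) for _ in range(3)], [0.0] * 3)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind[0]), reverse=True)
        parents = pop[: pop_size // 4]  # truncation selection
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=lambda ind: fitness(ind[0]))

best_params, best_log_sigmas = evolve()
print(fitness(best_params))
```

In this sketch, individuals whose mutation rates shrink near a good solution tend to produce offspring that stay near it, so low variation is itself selected for late in the run, loosely mirroring the population-level variance reduction reported for Ant-v5.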