Hypernetworks That Evolve Themselves

📅 2025-12-18
🤖 AI Summary
Conventional neural networks rely on external optimizers and lack intrinsic mechanisms for autonomous evolution. Method: We propose Self-Referential Graph Hypernetworks (SR-GHNs), which internalize evolutionary dynamics—including mutation rates and population dynamics—as learnable network parameters. By integrating hypernetworks, stochastic parameter generation, graph neural networks, and policy-gradient reinforcement learning, SR-GHNs form a self-referential architecture capable of self-evaluation, mutation, and inheritance. Contribution/Results: Evaluated on dynamic environment benchmarks—CartPoleSwitch, LunarLander-Switch, and Ant-v5—SR-GHNs demonstrate rapid, robust adaptation to environmental switches. In Ant-v5, the model autonomously evolves coordinated gaits and leverages diversity-aware regulation to converge to high-quality policies. These results empirically validate the spontaneous emergence of evolvability and establish the feasibility of open-ended autonomous learning in neural systems.

📝 Abstract
How can neural networks evolve themselves without relying on external optimizers? We propose Self-Referential Graph HyperNetworks, systems where the very machinery of variation and inheritance is embedded within the network. By uniting hypernetworks, stochastic parameter generation, and graph-based representations, Self-Referential GHNs mutate and evaluate themselves while adapting mutation rates as selectable traits. Through new reinforcement learning benchmarks with environmental shifts (CartPoleSwitch, LunarLander-Switch), Self-Referential GHNs show swift, reliable adaptation and emergent population dynamics. In the locomotion benchmark Ant-v5, they evolve coherent gaits and show fine-tuning capabilities by autonomously decreasing variation in the population to concentrate around promising solutions. Our findings support the idea that evolvability itself can emerge from neural self-reference. Self-Referential GHNs reflect a step toward synthetic systems that more closely mirror biological evolution, offering tools for autonomous, open-ended learning agents.
Problem

Research questions and friction points this paper is trying to address.

Evolve neural networks without external optimizers
Adapt mutation rates as selectable traits autonomously
Enable autonomous learning through neural self-reference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Referential Graph HyperNetworks embed evolution internally
They mutate and adapt mutation rates as selectable traits
They autonomously decrease variation to concentrate on promising solutions
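The core idea in these bullets, that mutation rates are themselves inherited, mutated, and selected, can be illustrated with a minimal self-adaptation sketch. This is a generic evolution-strategies-style illustration, not the paper's SR-GHN implementation: the `Individual`, `fitness`, and all parameter names here are hypothetical stand-ins, and the toy objective replaces an actual reinforcement learning episode return.

```python
import numpy as np

rng = np.random.default_rng(0)

class Individual:
    """A genome that carries its own mutation rate as an evolvable trait.
    Illustrative only; SR-GHNs generate parameters via a graph hypernetwork."""

    def __init__(self, weights, log_sigma):
        self.weights = weights      # stand-in for policy/hypernetwork parameters
        self.log_sigma = log_sigma  # log mutation step size, inherited and mutated

    def mutate(self, tau=0.1):
        # Mutate the mutation rate first, then the weights with the new rate,
        # so selection acts on the rate indirectly through offspring fitness.
        child_log_sigma = self.log_sigma + tau * rng.standard_normal()
        sigma = np.exp(child_log_sigma)
        child_weights = self.weights + sigma * rng.standard_normal(self.weights.shape)
        return Individual(child_weights, child_log_sigma)

def fitness(ind):
    # Toy objective (maximize -||w||^2); a real setup would use episode return.
    return -np.sum(ind.weights ** 2)

# Truncation selection with elitism: parents compete with their offspring.
pop = [Individual(rng.standard_normal(8), np.log(0.5)) for _ in range(16)]
for gen in range(50):
    offspring = [p.mutate() for p in pop]
    pop = sorted(pop + offspring, key=fitness, reverse=True)[:16]
```

Because offspring of well-tuned mutation rates tend to be fitter, useful values of `log_sigma` spread through the population without any external schedule, which is the sense in which the population can autonomously decrease variation to concentrate around promising solutions.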
Joachim Winther Pedersen
IT University of Copenhagen, Denmark
Erwan Plantec
IT University of Copenhagen, Denmark
Eleni Nisioti
IT University of Copenhagen, Denmark
Marcello Barylli
IT University of Copenhagen, Denmark
Milton Montero
IT University of Copenhagen, Denmark
Kathrin Korte
PhD Student, IT University of Copenhagen
Neuroevolution, Multi-Agent Reinforcement Learning, Bioinspired Machine Learning
Sebastian Risi
Professor, IT University of Copenhagen
Artificial Intelligence, Neural Networks, Neuroevolution, Artificial Life