Hyperagents

📅 2026-03-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current self-improving AI systems are constrained by fixed, handcrafted meta-mechanisms, limiting their capacity for open-ended self-enhancement beyond code-centric domains. This work proposes the Hyperagents framework, which unifies task agents and editable meta-agents into a single modifiable program, making the metacognitive mechanisms themselves editable for the first time. This removes the requirement that task performance and self-improvement capability be aligned within the same domain. Built as an extension of the Darwin Gödel Machine (DGM), the resulting DGM-Hyperagents (DGM-H) integrates editable metacognition, persistent memory, and performance tracking, achieving sustained acceleration in self-improvement across diverse tasks. Empirical results show significant gains over existing baselines and reveal emergent cross-domain transfer and cumulative improvements at the meta-level.

📝 Abstract
Self-improving AI systems aim to reduce reliance on human engineering by learning to improve their own learning and problem-solving processes. Existing approaches to self-improvement rely on fixed, handcrafted meta-level mechanisms, fundamentally limiting how fast such systems can improve. The Darwin Gödel Machine (DGM) demonstrates open-ended self-improvement in coding by repeatedly generating and evaluating self-modified variants. Because both evaluation and self-modification are coding tasks, gains in coding ability can translate into gains in self-improvement ability. However, this alignment does not generally hold beyond coding domains. We introduce hyperagents, self-referential agents that integrate a task agent (which solves the target task) and a meta agent (which modifies itself and the task agent) into a single editable program. Crucially, the meta-level modification procedure is itself editable, enabling metacognitive self-modification that improves not only the task-solving behavior but also the mechanism that generates future improvements. We instantiate this framework by extending DGM to create DGM-Hyperagents (DGM-H), eliminating the assumption of domain-specific alignment between task performance and self-modification skill to potentially support self-accelerating progress on any computable task. Across diverse domains, DGM-H improves performance over time and outperforms baselines without self-improvement or open-ended exploration, as well as prior self-improving systems. Furthermore, DGM-H improves the process by which it generates new agents (e.g., persistent memory, performance tracking), and these meta-level improvements transfer across domains and accumulate across runs. DGM-Hyperagents offer a glimpse of open-ended AI systems that do not merely search for better solutions, but continually improve their search for how to improve.
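The core structural idea in the abstract, that the task-solving code and the self-modification code live in one editable program, so the meta-level procedure can rewrite itself, can be illustrated with a minimal sketch. This is not the paper's implementation; the class, function names (`solve_task`, `propose_edit`), and the identity "edit" are hypothetical placeholders chosen for illustration only.

```python
# Conceptual sketch (assumed, not from the paper): a hyperagent stores both
# its task program and its meta program as editable source strings. The meta
# program receives BOTH strings and may return modified versions of either,
# which is what makes the self-modification procedure itself modifiable.

class Hyperagent:
    def __init__(self, task_code: str, meta_code: str):
        self.task_code = task_code  # program that solves the target task
        self.meta_code = meta_code  # program that edits task_code AND meta_code

    def solve(self, x):
        env = {}
        exec(self.task_code, env)  # expected to define solve_task(x)
        return env["solve_task"](x)

    def self_modify(self) -> "Hyperagent":
        env = {}
        exec(self.meta_code, env)  # expected to define propose_edit(...)
        new_task, new_meta = env["propose_edit"](self.task_code, self.meta_code)
        return Hyperagent(new_task, new_meta)

# Minimal demo: a trivial meta program that returns both programs unchanged.
# A real system would instead generate and evaluate candidate edits.
TASK = "def solve_task(x):\n    return x * 2\n"
META = ("def propose_edit(task_code, meta_code):\n"
        "    return task_code, meta_code\n")

agent = Hyperagent(TASK, META)
child = agent.self_modify()
print(agent.solve(3))  # 6
print(child.solve(3))  # 6
```

The point of the sketch is the second return value of `propose_edit`: because the meta program emits a (possibly new) meta program, improvements can accrue at the meta-level, which is the property the abstract contrasts with fixed, handcrafted meta-mechanisms.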
Problem

Research questions and friction points this paper is trying to address.

self-improving AI
open-ended improvement
meta-level modification
hyperagents
computable tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

hyperagents
self-improving AI
metacognitive self-modification
open-ended learning
editable meta-mechanism
🔎 Similar Papers
No similar papers found.