🤖 AI Summary
Existing autonomous agents struggle to balance long-term experiential learning with real-time, context-sensitive decision-making, often resulting in cognitive rigidity and inefficient use of context. This work proposes AutoAgent, a self-evolving multi-agent framework that enables continual cognitive updating and skill reuse without external retraining, through a closed-loop cognitive evolution mechanism, elastic memory orchestration, and a unified action space supporting tool invocation, LLM generation, and agent collaboration. By integrating structured prompt modeling, compressed interaction histories, and situational abstraction, the framework significantly improves task success rates, tool-use efficiency, and collaborative robustness across retrieval-augmented reasoning, tool-augmented benchmarks, and embodied tasks.
📝 Abstract
Autonomous agent frameworks still struggle to reconcile long-term experiential learning with real-time, context-sensitive decision-making. In practice, this gap appears as static cognition, rigid workflow dependence, and inefficient context usage, which jointly limit adaptability in open-ended and non-stationary environments. To address these limitations, we present AutoAgent, a self-evolving multi-agent framework built on three tightly coupled components: evolving cognition, on-the-fly contextual decision-making, and elastic memory orchestration. At the core of AutoAgent, each agent maintains structured prompt-level cognition over tools, self-capabilities, peer expertise, and task knowledge. During execution, this cognition is combined with live task context to select actions from a unified space that includes tool calls, LLM-based generation, and inter-agent requests. To support efficient long-horizon reasoning, an Elastic Memory Orchestrator dynamically organizes interaction history by preserving raw records, compressing redundant trajectories, and constructing reusable episodic abstractions, thereby reducing token overhead while retaining decision-critical evidence. These components are integrated through a closed-loop cognitive evolution process that aligns intended actions with observed outcomes to continuously update cognition and expand reusable skills, without external retraining. Empirical results across retrieval-augmented reasoning, tool-augmented agent benchmarks, and embodied task environments show that AutoAgent consistently improves task success, tool-use efficiency, and collaborative robustness over static and memory-augmented baselines. Overall, AutoAgent provides a unified and practical foundation for adaptive autonomous agents that must learn from experience while making reliable context-aware decisions in dynamic environments.
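The abstract does not specify implementation details, but the interplay it describes between structured cognition, the unified action space (tool calls, LLM generation, inter-agent requests), and closed-loop cognitive updates can be sketched concretely. The following is a minimal, illustrative Python sketch under stated assumptions: the names (`Cognition`, `select_action`), the exponential-moving-average update rule, and the 0.5 reliability threshold are all hypothetical stand-ins, not the paper's actual method.

```python
from dataclasses import dataclass, field

# Hypothetical action kinds mirroring the unified action space described
# in the abstract: tool invocation, LLM generation, inter-agent requests.
TOOL, LLM, AGENT = "tool", "llm", "agent"

@dataclass
class Action:
    kind: str    # one of TOOL, LLM, AGENT
    target: str  # tool name, peer name, or generation directive

@dataclass
class Cognition:
    """Structured prompt-level cognition (fields are illustrative)."""
    tool_skills: dict = field(default_factory=dict)    # tool -> believed success rate
    peer_expertise: dict = field(default_factory=dict)  # peer -> set of topics

    def update(self, action: Action, success: bool) -> None:
        """Closed-loop evolution: align the intended action with its
        observed outcome (here, a simple EMA over tool success)."""
        if action.kind == TOOL:
            prev = self.tool_skills.get(action.target, 0.5)
            self.tool_skills[action.target] = 0.8 * prev + 0.2 * (1.0 if success else 0.0)

def select_action(cognition: Cognition, task: dict) -> Action:
    """Combine cognition with live task context to pick from the unified space."""
    # Prefer a needed tool the agent believes it can use reliably.
    for tool, score in sorted(cognition.tool_skills.items(), key=lambda kv: -kv[1]):
        if tool in task["needs"] and score >= 0.5:
            return Action(TOOL, tool)
    # Otherwise delegate to a peer whose expertise matches the task topic.
    for peer, topics in cognition.peer_expertise.items():
        if task["topic"] in topics:
            return Action(AGENT, peer)
    # Fall back to direct LLM-based generation.
    return Action(LLM, "generate")
```

For example, an agent that trusts its `search` tool handles a retrieval task itself, but a task needing a tool it rates poorly is routed to a peer with matching expertise; each outcome then feeds back through `update`, so the cognition evolves without any external retraining.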