Agent-centric learning: from external reward maximization to internal knowledge curation

📅 2025-07-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional general intelligence research overemphasizes external reward maximization, resulting in agents with limited adaptability and poor generalization. This paper proposes an agent-centric learning paradigm that shifts the learning objective from environmental control to the controllable construction and dynamic diversification of internal knowledge representations. Its core contribution is the formalization—first in the literature—of “representation empowerment”: the agent’s capacity to actively shape its own knowledge representations, thereby transcending dependence on extrinsic feedback. Methodologically, the framework integrates intrinsic-motivation-driven reinforcement learning with representation learning, introducing a joint metric of controllability and diversity to support self-organized knowledge development. Experiments demonstrate substantial improvements in cross-task generalization and environmental adaptability. The approach provides a novel, interpretable, and controllable pathway for designing general intelligence systems.

Technology Category

Application Category

📝 Abstract
The pursuit of general intelligence has traditionally centered on external objectives: an agent's control over its environments or mastery of specific tasks. This external focus, however, can produce specialized agents that lack adaptability. We propose representational empowerment, a new perspective towards a truly agent-centric learning paradigm by moving the locus of control inward. This objective measures an agent's ability to controllably maintain and diversify its own knowledge structures. We posit that the capacity -- to shape one's own understanding -- is an element for achieving better ``preparedness'' distinct from direct environmental influence. Focusing on internal representations as the main substrate for computing empowerment offers a new lens through which to design adaptable intelligent systems.
Problem

Research questions and friction points this paper is trying to address.

Shifting focus from external rewards to internal knowledge control
Enhancing agent adaptability through representational empowerment
Designing intelligent systems with self-curated knowledge structures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agent-centric learning shifts focus inward
Representational empowerment diversifies knowledge structures
Internal representations enhance adaptability
🔎 Similar Papers
No similar papers found.
H
Hanqi Zhou
Human and Machine Cognition Lab, University of Tübingen, Germany; Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Germany
F
Fryderyk Mantiuk
Human and Machine Cognition Lab, University of Tübingen, Germany
David G. Nagy
David G. Nagy
University of Tübingen, Max Planck Institute for Biological Cybernetics
computational cognitive sciencemachine learning
Charley M. Wu
Charley M. Wu
Professor of Computational Cognitive Science, TU Darmstadt
GeneralizationExplorationCompositionalitySocial learningCompression