🤖 AI Summary
Traditional general intelligence research overemphasizes external reward maximization, resulting in agents with limited adaptability and poor generalization. This paper proposes an agent-centric learning paradigm that shifts the learning objective from environmental control to the controllable construction and dynamic diversification of internal knowledge representations. Its core contribution is the first formalization in the literature of "representational empowerment": the agent's capacity to actively shape its own knowledge representations, thereby reducing dependence on extrinsic feedback. Methodologically, the framework integrates intrinsic-motivation-driven reinforcement learning with representation learning, introducing a joint metric of controllability and diversity to support self-organized knowledge development. Experiments demonstrate substantial improvements in cross-task generalization and environmental adaptability. The approach provides a novel, interpretable, and controllable pathway for designing general intelligence systems.
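For context on the quantity being moved inward: classical empowerment is usually defined as the channel capacity from an agent's actions to its future states, max over action distributions of I(A; S'). The paper's representational variant computes a related quantity over internal representations rather than environment states; the sketch below only illustrates the classical, state-based quantity on a toy discrete transition model, using the standard Blahut-Arimoto iteration (the function name and toy channel are illustrative, not from the paper).

```python
import math

def channel_capacity(p_next, iters=200):
    """Blahut-Arimoto: capacity (in nats) of the channel a -> s',
    where p_next[a][s] = p(s' = s | a).  Classical empowerment is this
    capacity, with actions as channel input and future states as output."""
    n_a, n_s = len(p_next), len(p_next[0])
    q = [1.0 / n_a] * n_a  # distribution over actions, uniform init
    for _ in range(iters):
        # marginal over next states under the current action distribution
        p_s = [sum(q[a] * p_next[a][s] for a in range(n_a)) for s in range(n_s)]
        # multiplicative update: q(a) ∝ q(a) · exp(KL(p(·|a) ‖ p_s))
        d = []
        for a in range(n_a):
            kl = sum(p_next[a][s] * math.log(p_next[a][s] / p_s[s])
                     for s in range(n_s) if p_next[a][s] > 0)
            d.append(q[a] * math.exp(kl))
        z = sum(d)
        q = [x / z for x in d]
    # capacity = mutual information I(A; S') at the optimized q
    p_s = [sum(q[a] * p_next[a][s] for a in range(n_a)) for s in range(n_s)]
    return sum(q[a] * p_next[a][s] * math.log(p_next[a][s] / p_s[s])
               for a in range(n_a) for s in range(n_s) if p_next[a][s] > 0)

# Two actions with fully distinct deterministic outcomes: capacity = ln 2,
# i.e. one bit of reliable influence over the next state.
P = [[1.0, 0.0],
     [0.0, 1.0]]
print(round(channel_capacity(P), 4))  # 0.6931
```

The representational version described in the summary would replace the environment states s' with internal representation states and score both controllability (the capacity term) and diversity of the reachable representations.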
📝 Abstract
The pursuit of general intelligence has traditionally centered on external objectives: an agent's control over its environment or mastery of specific tasks. This external focus, however, can produce specialized agents that lack adaptability. We propose representational empowerment, a new perspective that moves the locus of control inward, toward a truly agent-centric learning paradigm. This objective measures an agent's ability to controllably maintain and diversify its own knowledge structures. We posit that this capacity to shape one's own understanding is a key element of "preparedness," distinct from direct environmental influence. Focusing on internal representations as the main substrate for computing empowerment offers a new lens through which to design adaptable intelligent systems.