🤖 AI Summary
Existing continual learning formulations express the agent's limitations through explicit capacity constraints, which can be ad hoc, difficult to incorporate, and may blunt the benefits of scaling up the agent; they miss the more fundamental constraint that the agent is computationally embedded in its environment.
Method: We propose a paradigm grounded in *computational embeddability*: the agent is modeled as an automaton simulated within its environment, formalizing the assumption that the world is larger than the agent. Rather than imposing artificial capacity limits, we prove that such an embedded automaton is equivalent to an agent interacting with a partially observable Markov decision process (POMDP) over a countably infinite state space, and we define *interactivity*, an objective that quantifies continual predictive learning and dynamic adaptation. The approach combines automata theory, model-based reinforcement learning, and computability analysis.
Contribution/Results: In controlled experiments with deep linear and nonlinear networks, deep linear networks sustain higher interactivity as capacity increases, whereas deep nonlinear networks struggle to sustain interactivity. This work ties architectural choice to continual adaptability, providing both theoretical foundations (the embedded-automaton/POMDP equivalence) and a new evaluation objective for embedded continual learning.
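The embedded-agent setting described above can be caricatured in a few lines. The sketch below is purely illustrative and not the paper's construction: the environment's state is an unbounded counter (a countably infinite state space) that also evolves with the agent's own memory, while the agent only ever sees a bounded observation, giving partial observability. All names (`embedded_interaction`, `parity_agent`, `obs_mod`) are hypothetical.

```python
def embedded_interaction(agent_step, steps=20, obs_mod=4):
    # Environment state: an unbounded counter (countably infinite states).
    # The agent only observes the counter modulo obs_mod, so from its
    # perspective the interaction is partially observable.
    env_counter = 0
    agent_memory = 0
    trace = []
    for _ in range(steps):
        obs = env_counter % obs_mod  # bounded observation of unbounded state
        action, agent_memory = agent_step(obs, agent_memory)
        env_counter += 1 + action    # the agent's action influences the world
        trace.append((obs, action))
    return trace

def parity_agent(obs, memory):
    # A toy finite-state agent: one bit of memory, acts on observation parity.
    return (obs % 2) ^ memory, 1 - memory
```

No finite-state agent in this loop can track the full counter; it can only adapt its behaviour relative to the bounded observations it receives, which is the flavour of constraint the embedded perspective formalizes.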
📝 Abstract
Continual learning is often motivated by the big world hypothesis: the idea that "the world is bigger" than the agent. Recent problem formulations capture this idea by explicitly constraining the agent relative to the environment. These constraints lead to solutions in which the agent continually adapts to make the best use of its limited capacity, rather than converging to a fixed solution. However, explicit constraints can be ad hoc, difficult to incorporate, and may limit the effectiveness of scaling up the agent's capacity. In this paper, we characterize a problem setting in which an agent, regardless of its capacity, is constrained by being embedded in the environment. In particular, we introduce a computationally-embedded perspective that represents an embedded agent as an automaton simulated within a universal (formal) computer. Such an automaton is always constrained; we prove that it is equivalent to an agent that interacts with a partially observable Markov decision process over a countably infinite state space. We propose an objective for this setting, which we call interactivity, that measures an agent's ability to continually adapt its behaviour by learning new predictions. We then develop a model-based reinforcement learning algorithm for interactivity-seeking, and use it to construct a synthetic problem for evaluating continual learning capability. Our results show that deep nonlinear networks struggle to sustain interactivity, whereas deep linear networks sustain higher interactivity as capacity increases.
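The abstract describes interactivity as measuring an agent's ability to continually adapt by learning new predictions, but its formal definition is not given here. The following is therefore only a toy proxy under assumed details: a nonstationary prediction stream stands in for "the world is bigger", and the score rewards continued error reduction within recent windows (ongoing adaptation) rather than low final error. The stream generator, the windowed score, and all names are hypothetical.

```python
import random

def make_stream(num_phases=3, steps_per_phase=200, seed=0):
    # Nonstationary scalar stream: the target weight changes each phase,
    # so a converged predictor must keep re-adapting.
    rng = random.Random(seed)
    weights = [rng.uniform(-1, 1) for _ in range(num_phases)]
    for w in weights:
        for _ in range(steps_per_phase):
            x = rng.uniform(-1, 1)
            yield x, w * x

def interactivity_proxy(stream, lr=0.1, window=50):
    # Online linear predictor trained by SGD on squared error.
    # The proxy averages, over windows, how much error fell from the
    # first half of the window to the second (continued adaptation).
    w_hat = 0.0
    errors = []
    gains = []
    for t, (x, y) in enumerate(stream, 1):
        pred = w_hat * x
        err = (pred - y) ** 2
        w_hat -= lr * 2 * (pred - y) * x  # SGD step
        errors.append(err)
        if t % window == 0:
            half = window // 2
            first = sum(errors[-window:-half]) / half
            second = sum(errors[-half:]) / half
            gains.append(max(0.0, first - second))
    return sum(gains) / len(gains) if gains else 0.0
```

An agent that converges and then stops improving scores near zero on later windows, even if its error is low; an agent that keeps learning after each shift scores higher, which is the qualitative behaviour the interactivity objective is said to capture.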