🤖 AI Summary
This paper studies continual learning for agents under capacity constraints, using linear-quadratic-Gaussian (LQG) sequential prediction as a canonical dynamic learning setting. To model bounded memory and compute, it develops a theoretical framework grounded in optimal control and dynamical systems theory, formalizing the steady-state structure of resource allocation across sub-problems. By analyzing the coupling properties of decomposable tasks, the paper derives an optimal continual learning policy under capacity limits (subject to appropriate technical conditions) and gives an analytical characterization of the steady-state resource allocation. The results are a first step toward a systematic theory of resource-constrained continual learning: they quantify the trade-off between learning performance and resource budgets and suggest design guidelines for lightweight adaptive systems.
📝 Abstract
Any agent we can possibly build is subject to capacity constraints, as memory and compute resources are inherently finite. However, comparatively little attention has been devoted to understanding how agents with limited capacity should allocate their resources for optimal performance. The goal of this paper is to shed some light on this question by studying a simple yet relevant continual learning problem: the capacity-constrained linear-quadratic-Gaussian (LQG) sequential prediction problem. We derive a solution to this problem under appropriate technical conditions. Moreover, for problems that can be decomposed into a set of sub-problems, we also demonstrate how to optimally allocate capacity across these sub-problems in the steady state. We view the results of this paper as a first step in the systematic theoretical study of learning under capacity constraints.
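To make the setting concrete, the sketch below simulates the standard (unconstrained) LQG sequential prediction problem: a linear-Gaussian state evolves as x_{t+1} = A x_t + w_t, we observe y_t = C x_t + v_t, and the Kalman filter gives the optimal one-step-ahead predictor. This is a minimal illustration under assumed scalar dynamics and hypothetical parameter values; the paper's capacity-constrained variant and its exact formulation are not reproduced here.

```python
import numpy as np

# Standard LQG sequential prediction (assumed formulation, scalar case):
#   state:       x_{t+1} = A x_t + w_t,   w_t ~ N(0, W)
#   observation: y_t     = C x_t + v_t,   v_t ~ N(0, V)
# The unconstrained optimal one-step predictor of y_{t+1} is the Kalman filter.

rng = np.random.default_rng(0)
A, C = 0.9, 1.0   # dynamics and observation gains (hypothetical values)
W, V = 0.1, 0.5   # process and observation noise variances (hypothetical values)

# Simulate a trajectory.
T = 5000
x = np.zeros(T + 1)
y = np.zeros(T)
for t in range(T):
    y[t] = C * x[t] + rng.normal(scale=np.sqrt(V))
    x[t + 1] = A * x[t] + rng.normal(scale=np.sqrt(W))

# Kalman filter: track the posterior mean m and variance P of x_t,
# then predict y_{t+1} as C * A * m.
m, P = 0.0, 1.0
preds = np.zeros(T)
for t in range(T):
    # Measurement update with y_t.
    K = P * C / (C * C * P + V)          # Kalman gain
    m = m + K * (y[t] - C * m)
    P = (1.0 - K * C) * P
    # One-step-ahead prediction of y_{t+1}.
    preds[t] = C * A * m
    # Time update (propagate through the dynamics).
    m, P = A * m, A * A * P + W

kalman_mse = np.mean((preds[:-1] - y[1:]) ** 2)
naive_mse = np.mean((y[:-1] - y[1:]) ** 2)   # baseline: predict y_{t+1} = y_t
print(f"Kalman MSE: {kalman_mse:.3f}, naive MSE: {naive_mse:.3f}")
```

A capacity-constrained agent cannot in general maintain the full filter state, which is what motivates the resource-allocation question studied in the paper.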