🤖 AI Summary
This work studies realizable continual linear regression under random task sequences, aiming to close the significant gap between the theoretical lower bound Ω(1/k) and the previous best upper bound O(1/k^{1/4}) for unregularized methods. It establishes, for the first time, that near-optimal and even optimal convergence is attainable via either explicit ℓ₂ regularization or implicit regularization through scheduling, such as progressively reducing the number of SGD steps per task. The analysis combines SGD theory for time-varying objectives, carefully constructed surrogate (proxy) losses, and an increasing regularization-strength schedule: a fixed regularization strength yields a near-optimal rate of O(log k / k), while the increasing schedule achieves the optimal O(1/k). The theory thus suggests that moderately strengthening regularization, or shortening per-task training, provably mitigates catastrophic forgetting, at least in the worst case, closing a fundamental gap in the theory of continual linear regression.
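The explicit regularization scheme discussed above can be sketched as a per-task proximal least-squares update; the notation below ($X_t$, $y_t$, $\lambda_t$) is illustrative and may differ from the paper's exact formulation:
\[
w_t \;=\; \arg\min_{w}\; \lVert X_t w - y_t \rVert_2^2 \;+\; \lambda_t \,\lVert w - w_{t-1} \rVert_2^2 ,
\]
where $\lambda_t$ is the regularization strength on task $t$. Expanding the first-order optimality condition gives
\[
w_t \;=\; w_{t-1} + \bigl(X_t^\top X_t + \lambda_t I\bigr)^{-1} X_t^\top \bigl(y_t - X_t w_{t-1}\bigr),
\]
a damped step from the previous iterate, which is the sense in which such schemes can be read as (preconditioned) stochastic gradient steps on suitable surrogate losses; increasing $\lambda_t$ shrinks the per-task step, analogously to shortening the per-task step budget in the implicit scheme.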
📝 Abstract
We study realizable continual linear regression under random task orderings, a common setting for developing continual learning theory. In this setup, the worst-case expected loss after $k$ learning iterations admits a lower bound of $\Omega(1/k)$. However, prior work using an unregularized scheme has only established an upper bound of $O(1/k^{1/4})$, leaving a significant gap. Our paper proves that this gap can be narrowed, or even closed, using two frequently used regularization schemes: (1) explicit isotropic $\ell_2$ regularization, and (2) implicit regularization via finite step budgets. We show that these approaches, which are used in practice to mitigate forgetting, reduce to stochastic gradient descent (SGD) on carefully defined surrogate losses. Through this lens, we identify a fixed regularization strength that yields a near-optimal rate of $O(\log k / k)$. Moreover, formalizing and analyzing a generalized variant of SGD for time-varying functions, we derive an increasing regularization strength schedule that provably achieves an optimal rate of $O(1/k)$. This suggests that schedules that increase the regularization coefficient or decrease the number of steps per task are beneficial, at least in the worst case.
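To make the explicit scheme concrete, here is a minimal simulation sketch of realizable continual linear regression with isotropic $\ell_2$ regularization toward the previous iterate. Everything in it is an illustrative assumption rather than the paper's exact construction: the dimensions, the random-task model, the closed-form solve, and in particular the linearly increasing schedule `lam = k`, which stands in for the paper's increasing regularization schedule.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20                            # parameter dimension (assumed)
w_star = rng.standard_normal(d)   # shared solution; tasks are realizable by w_star

def sample_task(m=5):
    """Random under-determined task: m < d rows, labels generated by w_star."""
    X = rng.standard_normal((m, d))
    return X, X @ w_star

w = np.zeros(d)                   # initial iterate
K = 2000                          # number of tasks
for k in range(1, K + 1):
    X, y = sample_task()
    lam = float(k)                # hypothetical increasing schedule, lambda_k ~ k
    # Closed-form minimizer of ||X w - y||^2 + lam * ||w - w_prev||^2:
    # w <- w_prev + (X^T X + lam I)^{-1} X^T (y - X w_prev)
    A = X.T @ X + lam * np.eye(d)
    w = w + np.linalg.solve(A, X.T @ (y - X @ w))

print("final distance to w_star:", np.linalg.norm(w - w_star))
```

Each regularized task solve is a damped step toward the current task's solution set, so the iterates drift toward `w_star` without overfitting any single under-determined task; with the growing `lam`, the distance to `w_star` (and hence the expected loss on a random task) shrinks steadily as `k` grows.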