🤖 AI Summary
This work investigates whether large language models (LLMs) can achieve human-like continual learning via in-context learning (ICL)—specifically, retaining prior knowledge over extended multi-task sequences while accumulating new knowledge across tasks. To this end, the authors propose *in-context continual learning* (ICCL), a framework integrating task scheduling, prompt rearrangement, and distributed-practice mechanisms, grounded in a human-retention similarity metric and computational modeling of the spacing effect to mitigate catastrophic forgetting. Experiments on a Markov-chain-based multi-task benchmark demonstrate that linear-attention models (e.g., Mamba, RWKV) exhibit memory dynamics closely aligned with human behavioral patterns, including a distinct spacing-effect "sweet spot." Crucially, ICCL achieves an effective stability–plasticity trade-off without parameter updates. This is the first empirical evidence that ICL—when augmented with cognitively inspired mechanisms—can support human-like continual learning.
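The summary mentions task scheduling with distributed practice, i.e., spacing repetitions of a task apart in the prompt rather than massing them together. The paper's exact scheduling procedure is not given here; the sketch below is only an illustrative round-robin scheduler where a `chunk` parameter (a hypothetical name) controls spacing — `chunk=1` maximally interleaves tasks, while a large `chunk` reproduces massed practice.

```python
def spaced_schedule(task_examples, chunk=1):
    """Arrange in-context examples from several tasks into one sequence.

    Cycles through the tasks, emitting `chunk` examples per visit, so
    consecutive repetitions of the same task are separated by the other
    tasks' examples. chunk=1 gives maximal spacing (fully interleaved);
    chunk >= len(examples) collapses to massed practice (all of task A,
    then all of task B, ...). Illustrative only, not the paper's method.
    """
    iters = {t: iter(exs) for t, exs in task_examples.items()}
    schedule = []
    while iters:
        for t in list(iters):  # snapshot: tasks may finish mid-cycle
            for _ in range(chunk):
                try:
                    schedule.append((t, next(iters[t])))
                except StopIteration:
                    del iters[t]  # task exhausted; drop it from rotation
                    break
    return schedule

examples = {"A": ["a1", "a2"], "B": ["b1", "b2"]}
spaced = spaced_schedule(examples, chunk=1)  # interleaved (distributed practice)
massed = spaced_schedule(examples, chunk=2)  # blocked (massed practice)
```

Varying the chunk size is one simple way to sweep the spacing interval and probe for the retention "sweet spot" the summary describes.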
📝 Abstract
Large language models (LLMs) can adapt to new tasks via in-context learning (ICL) without parameter updates, making them powerful learning engines for fast adaptation. While extensive research has examined ICL as a few-shot learner, whether it can achieve long-term retention and cross-task knowledge accumulation when multiple tasks arrive sequentially remains underexplored. Motivated by human memory studies, we investigate the retention characteristics of ICL in multitask settings and extend it to in-context continual learning (ICCL), where continual learning ability emerges through task scheduling and prompt rearrangement. Experiments on Markov-chain benchmarks demonstrate that, for specific large language models, ICCL benefits from distributed practice (DP) in a manner analogous to humans, consistently revealing a spacing "sweet spot" for retention. Beyond retention performance, we propose a human-retention similarity metric to quantify how closely a continual-learning (CL) method aligns with human retention dynamics. Using this metric, we show that linear-attention models such as Mamba and RWKV exhibit particularly human-like retention patterns, despite their retention performance lagging behind that of Transformer-based LLMs. Overall, our results establish ICCL as both cognitively plausible and practically effective, providing an inference-only CL paradigm that mitigates catastrophic forgetting and addresses the stability–plasticity dilemma in conventional CL methods.
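The abstract's human-retention similarity metric is not specified here. One plausible minimal form — purely an assumption for illustration — is the Pearson correlation between a model's retention curve and a human-style forgetting curve measured at the same retention lags, with an Ebbinghaus-type exponential decay standing in for human data:

```python
from math import exp, sqrt

def retention_similarity(model_curve, human_curve):
    """Pearson correlation between two retention curves sampled at the
    same lags. A stand-in for the paper's metric, not its actual form."""
    n = len(model_curve)
    mx = sum(model_curve) / n
    my = sum(human_curve) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(model_curve, human_curve))
    vx = sum((x - mx) ** 2 for x in model_curve)
    vy = sum((y - my) ** 2 for y in human_curve)
    return cov / sqrt(vx * vy)

lags = range(1, 11)                       # retention interval, in tasks seen since
human = [exp(-t / 5.0) for t in lags]     # Ebbinghaus-style forgetting curve
model = [exp(-t / 4.0) for t in lags]     # hypothetical model retention curve
score = retention_similarity(model, human)
```

A metric of this shape rewards matching the *dynamics* of forgetting rather than raw accuracy, which is how a model could score as more human-like while still retaining less overall — consistent with the Mamba/RWKV finding above.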