🤖 AI Summary
Catastrophic forgetting (CF) in large language models (LLMs) during continual instruction tuning is not uniform but exhibits strong task- and architecture-dependent patterns. Method: We propose a model-specific analytical framework based on function vectors (FVs), grounded in a theoretically motivated model of activation bias. We show that CF primarily stems from shifts in function activation distributions, not from functional overwrite as conventionally assumed, and introduce FVs as learnable, model-dependent indicators of forgetting. We further design an FV-stability regularization training paradigm to mitigate such shifts. Contribution/Results: Our approach significantly alleviates forgetting across four standard continual instruction-tuning benchmarks, empirically validating the function-dynamics theory. The implementation will be open-sourced.
📝 Abstract
Catastrophic forgetting (CF) poses a significant challenge in machine learning, where a model forgets previously learned information upon learning new tasks. Despite the advanced capabilities of Large Language Models (LLMs), they continue to face challenges with CF during continual learning. Most existing research analyzes forgetting patterns through a single training sequence, thereby overlooking the intricate effects that diverse tasks have on model behavior. Our study explores CF across various settings, discovering that model forgetting is influenced by both the specific training tasks and the models themselves. To this end, we interpret forgetting by examining the function vector (FV), a compact representation of functions in LLMs, which offers a model-dependent indicator of the occurrence of CF. Through theoretical and empirical analyses, we demonstrate that CF in LLMs primarily stems from biases in function activation rather than the overwriting of task-processing functions. Leveraging these insights, we propose a novel function vector guided training methodology, incorporating a regularization technique to stabilize the FV and mitigate forgetting. Empirical tests on four benchmarks confirm the effectiveness of our proposed training method, substantiating our theoretical framework concerning CF and model function dynamics. We plan to make our code publicly accessible in the near future.
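The FV-stability regularization described above can be sketched as an auxiliary penalty added to the task loss. This is a minimal illustrative sketch, not the paper's implementation: the function names, the squared-L2 drift penalty, and the weight `lam` are all assumptions; the paper only states that a regularization term stabilizes the FV during continual tuning.

```python
# Hypothetical sketch of FV-stability regularization.
# Assumption: the function vector of the current model (fv_new) is compared
# against a frozen reference FV (fv_ref) captured before training on the
# new task; their drift is penalized with weight lam.

def fv_stability_penalty(fv_new, fv_ref, lam=0.1):
    """Squared-L2 drift of the current function vector from the reference."""
    assert len(fv_new) == len(fv_ref)
    drift = sum((a - b) ** 2 for a, b in zip(fv_new, fv_ref))
    return lam * drift

def regularized_loss(task_loss, fv_new, fv_ref, lam=0.1):
    # Discourages shifts in function activation, which the abstract
    # identifies as the main source of forgetting.
    return task_loss + fv_stability_penalty(fv_new, fv_ref, lam)
```

In a real training loop the FV would be a high-dimensional activation-derived vector and the penalty would be backpropagated alongside the instruction-tuning loss; the scalar version here only illustrates the shape of the objective.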