Unlocking the Power of Function Vectors for Characterizing and Mitigating Catastrophic Forgetting in Continual Instruction Tuning

📅 2025-02-16
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Catastrophic forgetting (CF) in large language models (LLMs) during continual instruction tuning is not uniform; it exhibits strong task- and architecture-dependent patterns. Method: We propose a model-specific analytical framework based on function vectors (FVs), grounded in a theoretically motivated model of activation bias. We show that CF primarily stems from shifts in function activation distributions, rather than from the overwriting of task-processing functions as conventionally assumed, and we introduce FVs as learnable, model-dependent indicators of forgetting. We further design an FV-stability regularization paradigm that mitigates these shifts during training. Contribution/Results: Our approach significantly alleviates forgetting across four standard continual instruction-tuning benchmarks, empirically validating the function-dynamics account of CF. The implementation will be open-sourced.
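The summary treats the function vector as an extractable representation of a task's computation. The paper's own extraction procedure is not reproduced on this page; purely as a hedged illustration, the sketch below follows a common recipe from the FV literature: average the last-token hidden activation over a handful of task prompts at a fixed layer. The helper name `extract_fv`, the layer index, and the Hugging Face-style interface are all assumptions, not the authors' implementation.

```python
import torch

# Hedged sketch: one common way to read out a function vector is to average
# last-token hidden activations over task prompts at a fixed layer.
# The layer index and helper name here are illustrative assumptions.
@torch.no_grad()
def extract_fv(model, tokenizer, task_prompts, layer=15):
    """Mean last-token hidden state over task prompts at one layer."""
    states = []
    for prompt in task_prompts:
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        out = model(**inputs, output_hidden_states=True)
        # hidden_states[layer] has shape (batch=1, seq_len, hidden_dim)
        states.append(out.hidden_states[layer][0, -1])
    return torch.stack(states).mean(dim=0)
```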

📝 Abstract
Catastrophic forgetting (CF) poses a significant challenge in machine learning: a model forgets previously learned information upon learning new tasks. Despite their advanced capabilities, large language models (LLMs) still suffer from CF during continual learning. Most existing research analyzes forgetting patterns along a single training sequence, overlooking the intricate effects that diverse tasks have on model behavior. Our study explores CF across various settings and finds that forgetting is influenced by both the specific training tasks and the models themselves. We therefore interpret forgetting through the function vector (FV), a compact representation of functions in LLMs, which offers a model-dependent indicator of when CF occurs. Through theoretical and empirical analyses, we demonstrate that CF in LLMs primarily stems from biases in function activation rather than from the overwriting of task-processing functions. Leveraging these insights, we propose a function-vector-guided training methodology that incorporates a regularization technique to stabilize the FV and mitigate forgetting. Empirical tests on four benchmarks confirm the effectiveness of the proposed method, substantiating our theoretical account of CF and model function dynamics. We plan to make our code publicly available in the near future.
Problem

Research questions and friction points this paper is trying to address.

Catastrophic forgetting in continual learning
Function vector interpretation in LLMs
Regularization technique to mitigate forgetting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Function vector guided training
Regularization stabilizes function vectors (a minimal sketch follows this list)
Mitigates catastrophic forgetting effectively
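One concrete reading of "regularization stabilizes function vectors" is a penalty on the drift of the current FV from a snapshot taken before tuning on the new task, added to the ordinary task loss. The sketch below is a minimal interpretation under that assumption; `fv_stability_loss`, the MSE form, and the weight value are hypothetical, not the paper's published formulation.

```python
import torch
import torch.nn.functional as F

def fv_stability_loss(fv_current, fv_reference, weight=0.1):
    """Penalize drift of the current FV from its pre-tuning snapshot."""
    # Assumption: an MSE penalty; the paper may use a different distance.
    return weight * F.mse_loss(fv_current, fv_reference)

# Hypothetical training step (fv_current recomputed with gradients enabled,
# unlike the no-grad extraction sketched under the AI summary):
#   loss = task_loss + fv_stability_loss(fv_current, fv_ref)
#   loss.backward()
```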
👥 Authors
Gangwei Jiang · University of Science and Technology of China · machine learning
Caigao Jiang · Independent
Zhaoyi Li · University of Science and Technology of China; City University of Hong Kong
Siqiao Xue · Ant Group, Alibaba · machine learning
Jun Zhou · Independent
Linqi Song · Associate Professor, Department of Computer Science, City University of Hong Kong · information theory, federated learning, natural language processing
Defu Lian · University of Science and Technology of China
Yin Wei · Zhejiang University