FGGM: Fisher-Guided Gradient Masking for Continual Learning

πŸ“… 2026-01-26
πŸ“ˆ Citations: 1
✨ Influential: 0
πŸ€– AI Summary
This work addresses catastrophic forgetting in large language models during continual learning by proposing a parameter importance estimation mechanism based on diagonal Fisher information, replacing conventional magnitude-based approaches. The method dynamically generates binary masks with adaptive thresholds so that updates are steered away from critical parameters, balancing model stability and plasticity without requiring replay of historical data. On the TRACE benchmark, the approach yields a 9.6% relative improvement over supervised fine-tuning and a 4.4% improvement over the MIGU method. It also significantly mitigates forgetting in code generation tasks while preserving the model's general-purpose capabilities.

πŸ“ Abstract
Catastrophic forgetting impairs the continual learning of large language models. We propose Fisher-Guided Gradient Masking (FGGM), a framework that mitigates this by strategically selecting parameters for updates using diagonal Fisher information. FGGM dynamically generates binary masks with adaptive thresholds, preserving critical parameters to balance stability and plasticity without requiring historical data. Unlike magnitude-based methods such as MIGU, our approach offers a mathematically principled estimate of parameter importance. On the TRACE benchmark, FGGM shows a 9.6% relative improvement in retaining general capabilities over supervised fine-tuning (SFT) and a 4.4% improvement over MIGU on TRACE tasks. Additional analysis on code generation tasks confirms FGGM's superior performance and reduced forgetting, establishing it as an effective replay-free solution.
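The core mechanism described above can be sketched in a few lines: approximate the diagonal Fisher information by the mean squared per-sample gradient, pick an adaptive (here, quantile-based) threshold, and build a binary mask that zeroes the gradient on high-importance parameters so they are preserved. This is a minimal NumPy illustration of the idea, not the paper's exact procedure; the function names, the `keep_ratio` parameter, and the quantile rule are assumptions for illustration.

```python
import numpy as np

def diagonal_fisher(per_sample_grads):
    # Empirical diagonal Fisher information: mean squared per-sample
    # gradient, giving one importance score per parameter.
    return np.mean(per_sample_grads ** 2, axis=0)

def fisher_mask(fisher, keep_ratio=0.5):
    # Adaptive threshold (illustrative): the quantile at keep_ratio.
    # Parameters at or below it get mask 1 (free to update); the
    # high-Fisher parameters above it get mask 0 (protected).
    threshold = np.quantile(fisher, keep_ratio)
    return (fisher <= threshold).astype(fisher.dtype)

# Toy example: gradients for 4 samples over 6 parameters.
rng = np.random.default_rng(0)
grads = rng.normal(size=(4, 6))

fisher = diagonal_fisher(grads)          # per-parameter importance
mask = fisher_mask(fisher, keep_ratio=0.5)
masked_grad = grads.mean(axis=0) * mask  # update touches only low-importance params
```

In an actual training loop the mask would be applied to each weight tensor's gradient before the optimizer step, so the protected (high-Fisher) parameters stay fixed while the rest remain plastic for the new task.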
Problem

Research questions and friction points this paper is trying to address.

catastrophic forgetting
continual learning
large language models
parameter stability
learning plasticity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fisher Information
Gradient Masking
Continual Learning
Catastrophic Forgetting
Parameter Importance