AI Summary
This work addresses catastrophic forgetting in large language models during continual learning by proposing a parameter-importance estimation mechanism based on diagonal Fisher information, which replaces conventional magnitude-based approaches. The method dynamically generates binary masks with adaptive thresholds to selectively update parameters, preserving the most critical ones and thereby balancing model stability and plasticity without requiring replay of historical data. Experimental results show that the approach improves performance by 9.6% over supervised fine-tuning and outperforms the MIGU method by 4.4% on the TRACE benchmark. It also significantly mitigates forgetting on code generation tasks while preserving the model's general-purpose capabilities.
Abstract
Catastrophic forgetting impairs the continual learning of large language models. We propose Fisher-Guided Gradient Masking (FGGM), a framework that mitigates forgetting by strategically selecting which parameters to update using the diagonal Fisher information. FGGM dynamically generates binary masks with adaptive thresholds, shielding critical parameters to balance stability and plasticity without requiring access to historical data. Unlike magnitude-based methods such as MIGU, our approach offers a mathematically principled estimate of parameter importance. On the TRACE benchmark, FGGM achieves a 9.6% relative improvement over supervised fine-tuning (SFT) in retaining general capabilities and a 4.4% improvement over MIGU on TRACE tasks. Additional analysis on code generation tasks confirms FGGM's superior performance and reduced forgetting, establishing it as an effective solution for continual learning in LLMs.
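The core mechanism described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function names, the quantile-based threshold, and the plain SGD step are my assumptions, not the authors' implementation): estimate per-parameter importance as the diagonal Fisher information (average squared gradient), derive a binary mask from an adaptive threshold, and let only low-importance parameters move.

```python
import numpy as np

# Illustrative sketch of Fisher-guided gradient masking.
# NOTE: names and the quantile threshold are assumptions for exposition,
# not the FGGM implementation from the paper.

def diagonal_fisher(per_example_grads):
    """Diagonal Fisher approximation: mean squared gradient per parameter."""
    return np.mean(np.square(per_example_grads), axis=0)

def importance_mask(fisher, update_fraction=0.5):
    """Binary mask via an adaptive (quantile) threshold.

    Parameters below the threshold get mask 1 (free to update);
    the most important parameters get mask 0 (protected).
    """
    threshold = np.quantile(fisher, update_fraction)
    return (fisher < threshold).astype(fisher.dtype)

def masked_sgd_step(params, grad, mask, lr=0.1):
    """Update only the unmasked (low-importance) parameters."""
    return params - lr * mask * grad

# Toy example: 4 parameters, 2 examples' worth of gradients.
grads = np.array([[2.0, 0.1, 1.0, 0.01],
                  [2.0, 0.1, 1.0, 0.01]])
fisher = diagonal_fisher(grads)       # [4.0, 0.01, 1.0, 0.0001]
mask = importance_mask(fisher, 0.5)   # frees the two least important params
params = masked_sgd_step(np.zeros(4), grads[0], mask)
```

The quantile threshold makes the mask adaptive to each layer's importance distribution rather than relying on a fixed magnitude cutoff, which is the contrast the abstract draws with magnitude-based methods such as MIGU.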