🤖 AI Summary
This work addresses the prevalent issue of capability degradation and catastrophic forgetting in large language models following task-specific fine-tuning. To mitigate this, the authors propose Activation-Guided Channel-wise Ability Transfer (ACT), a method that identifies a sparse, disentangled, and stable subset of model channels in which task-specific capabilities are concentrated. By selectively transferring only the parameters of these critical channels, the approach enables efficient capability fusion and recovery of forgotten skills. Experimental results demonstrate that ACT preserves original competencies while restoring lost abilities on multilingual mathematical and scientific reasoning tasks, and successfully consolidates multiple specialized models into a single, versatile model without significant performance trade-offs.
📝 Abstract
Large language models can be continually pre-trained or fine-tuned to improve performance in specific domains, languages, or skills, but this specialization often degrades other capabilities and may cause catastrophic forgetting. We investigate how abilities are distributed within LLM parameters by analyzing module activations under domain- and language-specific inputs for closely related models. Across layers and modules, we find that ability-related activations are highly concentrated in a small set of channels (typically <5%), and these channels are largely disentangled with good sufficiency and stability. Building on these observations, we propose ACT (Activation-Guided Channel-wise Ability Transfer), which localizes ability-relevant channels via activation differences and selectively transfers only the corresponding parameters, followed by lightweight fine-tuning for compatibility. Experiments on multilingual mathematical and scientific reasoning show that ACT can recover forgotten abilities while preserving retained skills. It can also merge multiple specialized models to integrate several abilities into a single model with minimal interference. Our code and data will be publicly released.
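The core idea of the abstract can be sketched in a few lines: rank channels by the mean absolute activation difference between two related models on ability-specific inputs, then copy only the top-ranked channels' parameters. This is a minimal NumPy illustration, not the paper's implementation; the function names, the per-module activation shapes, and the top-fraction threshold are assumptions chosen to mirror the "<5% of channels" observation.

```python
import numpy as np

def locate_ability_channels(act_base, act_spec, top_frac=0.05):
    """Rank channels by mean |activation difference| between a base and a
    specialized model on ability-specific inputs, and return indices of the
    top `top_frac` fraction (hypothetical threshold mirroring the paper's
    observation that relevant channels are typically <5% per module)."""
    # act_*: (num_inputs, num_channels) activations from one module
    diff = np.abs(act_spec - act_base).mean(axis=0)
    k = max(1, int(top_frac * diff.size))
    return np.argsort(diff)[-k:]

def transfer_channels(w_target, w_source, channels):
    """Copy only the parameter rows of the selected channels from the
    specialized (source) weights into a copy of the target weights."""
    w_new = w_target.copy()
    w_new[channels] = w_source[channels]
    return w_new
```

In a real pipeline, activations would be collected per layer/module over a probe set for the target domain or language, and the selective transfer would be followed by the lightweight compatibility fine-tuning the abstract describes.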