Beyond Sharpness: A Flatness Decomposition Framework for Efficient Continual Learning

📅 2026-01-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of existing sharpness-aware continual learning methods, which treat sharpness regularization as a monolithic signal, overlooking its internal structure and incurring high computational cost. The authors propose FLAD, a novel framework that, for the first time, decomposes sharpness perturbations into gradient-aligned and random-noise components. Surprisingly, they find that retaining only the random-noise component suffices to significantly enhance generalization. Building on this insight, they design a lightweight scheduling strategy that markedly reduces training overhead. Extensive experiments demonstrate that FLAD outperforms both standard and state-of-the-art sharpness-aware optimizers across diverse continual learning scenarios, achieving superior performance while enabling efficient deployment.
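
To make the decomposition concrete, here is a minimal sketch in PyTorch, assuming a SAM-style perturbation budget `rho`. The function name, the orthogonal-projection split, and the noise rescaling are illustrative assumptions based on the summary above, not the authors' implementation.

```python
import torch

def decompose_perturbation(grad: torch.Tensor, rho: float = 0.05):
    """Split a SAM-style perturbation direction into gradient-aligned and noise parts."""
    # Unit vector along the current gradient (the direction standard SAM perturbs in).
    g_unit = grad / (grad.norm() + 1e-12)

    # Isotropic random direction of the same shape as the gradient.
    z = torch.randn_like(grad)

    # Gradient-aligned component: projection of z onto the gradient direction.
    aligned = (z * g_unit).sum() * g_unit

    # Random-noise component: the residual orthogonal to the gradient.
    noise = z - aligned

    # Per the summary, only the noise component is retained; rescaling it to the
    # perturbation radius rho is an assumption for illustration.
    eps = rho * noise / (noise.norm() + 1e-12)
    return aligned, noise, eps
```

In this reading, the gradient-aligned part reproduces the usual SAM ascent direction, while the residual noise probes flat directions around the current weights; the paper's finding is that the latter alone is enough to improve generalization.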

📝 Abstract
Continual Learning (CL) aims to enable models to sequentially learn multiple tasks without forgetting previous knowledge. Recent studies have shown that optimizing towards flatter loss minima can improve model generalization. However, existing sharpness-aware methods for CL suffer from two key limitations: (1) they treat sharpness regularization as a unified signal without distinguishing the contributions of its components, and (2) they introduce substantial computational overhead that impedes practical deployment. To address these challenges, we propose FLAD, a novel optimization framework that decomposes sharpness-aware perturbations into gradient-aligned and stochastic-noise components, and show that retaining only the noise component promotes generalization. We further introduce a lightweight scheduling scheme that enables FLAD to maintain significant performance gains even under constrained training time. FLAD can be seamlessly integrated into various CL paradigms and consistently outperforms standard and sharpness-aware optimizers in diverse experimental settings, demonstrating its effectiveness and practicality in CL.
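
The lightweight scheduling mentioned in the abstract can likewise be sketched. The version below runs the noise-only perturbed pass on every `period`-th iteration and takes a plain optimizer step otherwise; the periodic schedule and all names are assumptions for illustration, since the paper's actual scheduler is not specified here.

```python
import torch

def training_step(model, loss_fn, batch, optimizer, step: int,
                  period: int = 4, rho: float = 0.05):
    """One training step; a noise-only perturbed pass runs every `period` iterations."""
    inputs, targets = batch

    # Clean forward/backward pass.
    optimizer.zero_grad()
    loss_fn(model(inputs), targets).backward()

    if step % period == 0:
        # Perturbed pass (SAM-style, but keeping only the random-noise
        # component of the perturbation, as the paper reports suffices).
        perturbs = []
        with torch.no_grad():
            for p in model.parameters():
                if p.grad is None:
                    continue
                g = p.grad
                z = torch.randn_like(p)
                z = z - (z * g).sum() / (g.norm() ** 2 + 1e-12) * g  # drop aligned part
                eps = rho * z / (z.norm() + 1e-12)
                p.add_(eps)
                perturbs.append((p, eps))
        # Recompute gradients at the perturbed point.
        optimizer.zero_grad()
        loss_fn(model(inputs), targets).backward()
        with torch.no_grad():
            for p, eps in perturbs:
                p.sub_(eps)  # restore original weights before the update

    optimizer.step()
```

Skipping the extra forward/backward pass on most iterations is what bounds the overhead: relative to SAM's roughly 2x cost per step, a period of k brings the average cost down toward (1 + 1/k) of a standard step.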
Problem

Research questions and friction points this paper is trying to address.

Continual Learning
Sharpness-aware Optimization
Computational Overhead
Flatness Decomposition
Model Generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Flatness Decomposition
Continual Learning
Sharpness-Aware Optimization
Noise Component
Lightweight Scheduling