Towards Realistic Class-Incremental Learning with Free-Flow Increments

📅 2026-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing class-incremental learning methods, which rely on predefined task partitions with equal numbers of classes and thus struggle in real-world scenarios where new categories arrive dynamically and irregularly. To tackle this challenge, the paper formally introduces the Free-Flow Class-Incremental Learning (FFCIL) setting for the first time and presents a model-agnostic framework that mitigates statistical instability caused by small-batch introductions of new classes. The proposed approach integrates multiple mechanisms—including class-mean supervision, knowledge distillation, contrastive learning, loss normalization, and dynamic intervention with weight alignment—to effectively stabilize learning. Extensive experiments demonstrate that the method significantly outperforms mainstream baselines under the FFCIL setting, substantially alleviating catastrophic forgetting and enabling stable, efficient continual learning.
📝 Abstract
Class-incremental learning (CIL) is typically evaluated under predefined schedules with equal-sized tasks, leaving more realistic and complex cases unexplored. However, a practical CIL system should learn immediately when any number of new classes arrive, without forcing fixed-size tasks. We formalize this setting as Free-Flow Class-Incremental Learning (FFCIL), where data arrives as a more realistic stream with a highly variable number of unseen classes at each step. This makes many existing CIL methods brittle and leads to clear performance degradation. We propose a model-agnostic framework for robust class-incremental learning under free-flow arrivals. It comprises a class-wise mean (CWM) objective that replaces sample-frequency-weighted loss with uniformly aggregated class-conditional supervision, thereby stabilizing the learning signal across free-flow class increments, as well as method-wise adjustments that improve robustness for representative CIL paradigms. Specifically, we constrain distillation to replayed data, normalize the scale of contrastive and knowledge-transfer losses, and introduce Dynamic Intervention Weight Alignment (DIWA) to prevent over-adjustment caused by unstable statistics from small class increments. Experiments confirm a clear performance degradation across various CIL baselines under FFCIL, while our strategies yield consistent gains.
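The class-wise mean idea described in the abstract can be illustrated with a minimal sketch: instead of averaging the loss over all samples (which weights each class by its sample frequency, so a step that introduces only a few examples of a new class contributes an unstable signal), per-class mean losses are averaged uniformly across classes. The function and variable names below are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch (assumed, not the authors' code) of a class-wise mean (CWM) style
# objective: average per-class mean losses uniformly across classes, rather
# than averaging over samples weighted by class frequency.
def cwm_loss(per_sample_losses, labels):
    """Uniformly aggregated class-conditional supervision (illustrative)."""
    by_class = {}
    for loss, label in zip(per_sample_losses, labels):
        by_class.setdefault(label, []).append(loss)
    # Mean loss within each class, then a uniform mean across classes.
    class_means = [sum(v) / len(v) for v in by_class.values()]
    return sum(class_means) / len(class_means)

# Contrast with the standard sample-frequency-weighted mean:
losses = [1.0, 1.0, 1.0, 4.0]   # three samples of class 0, one of class 1
labels = [0, 0, 0, 1]
standard = sum(losses) / len(losses)  # 1.75: dominated by the frequent class
cwm = cwm_loss(losses, labels)        # (1.0 + 4.0) / 2 = 2.5: classes weighted equally
```

Under free-flow arrivals, where a step may add a single small class, this kind of uniform class-conditional aggregation keeps the rarely sampled new class from being drowned out by larger, better-represented ones.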
Problem

Research questions and friction points this paper is trying to address.

Class-Incremental Learning
Free-Flow Increments
Realistic Learning Scenarios
Variable Class Arrival
Continual Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Free-Flow Class-Incremental Learning
Class-Wise Mean
Dynamic Intervention Weight Alignment
Model-Agnostic Framework
Unbalanced Incremental Streams