🤖 AI Summary
This work addresses critical challenges in replay-free class-incremental learning, namely insufficient initial-class coverage, poor model adaptability to future tasks, and severe catastrophic forgetting, by proposing the paradigm of *future robustness*, which shifts the incremental-learning objective from retrospective consistency to forward adaptability. Methodologically, the authors introduce a task-agnostic semantic anchoring mechanism coupled with future-aware regularization, integrating contrastive semantic distillation, meta-regularization constraints, and a lightweight task prototype memory bank, enabling end-to-end differentiable training on both ResNet and ViT backbones. Evaluated on CIFAR-100, ImageNet-R, and the newly constructed Future-Bench benchmark, the approach achieves a 12.3% improvement in future-task accuracy, reduces forgetting by 77.9% (to 2.1%), and outperforms state-of-the-art methods by 9.7% in cross-task generalization.
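To make the described objective more concrete, here is a minimal PyTorch sketch of how the pieces named above (contrastive semantic distillation, a meta-regularization term, and a prototype memory bank) could combine into one training loss. All class names, function names, and weights (`PrototypeBank`, `contrastive_distill`, `lambda_kd`, `lambda_meta`) are hypothetical stand-ins, not the paper's actual API; the authors' exact formulation may differ.

```python
# Illustrative sketch only; module names and hyperparameters are assumptions,
# not the paper's implementation.
import torch
import torch.nn.functional as F


class PrototypeBank:
    """Lightweight per-class prototype memory: one running-mean embedding per class."""

    def __init__(self, momentum=0.9):
        self.momentum = momentum
        self.protos = {}  # class id -> embedding tensor

    @torch.no_grad()
    def update(self, feats, labels):
        for c in labels.unique():
            mean = feats[labels == c].mean(dim=0)
            key = int(c)
            if key in self.protos:
                self.protos[key] = (self.momentum * self.protos[key]
                                    + (1 - self.momentum) * mean)
            else:
                self.protos[key] = mean


def contrastive_distill(student_feats, teacher_feats, tau=0.1):
    """Contrastive semantic distillation: pull each student embedding toward the
    teacher embedding of the same sample, pushing apart other samples' embeddings."""
    s = F.normalize(student_feats, dim=1)
    t = F.normalize(teacher_feats, dim=1)
    logits = s @ t.T / tau                           # (B, B) similarity matrix
    targets = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(logits, targets)


def training_step(backbone, old_backbone, head, bank, x, y,
                  lambda_kd=1.0, lambda_meta=0.1):
    feats = backbone(x)                              # current-model embeddings
    with torch.no_grad():
        old_feats = old_backbone(x)                  # frozen previous-task model

    ce = F.cross_entropy(head(feats), y)             # standard classification loss
    kd = contrastive_distill(feats, old_feats)       # semantic distillation term

    # Stand-in for the meta-regularization constraint: penalize drift of current
    # class means away from stored prototypes so past decision regions stay usable.
    meta = torch.tensor(0.0, device=x.device)
    for c in y.unique():
        if int(c) in bank.protos:
            meta = meta + F.mse_loss(feats[y == c].mean(dim=0),
                                     bank.protos[int(c)])

    bank.update(feats.detach(), y)                   # refresh prototype memory
    return ce + lambda_kd * kd + lambda_meta * meta
```

In this reading, the prototype bank plays the role of the "lightweight task prototype memory" (storing only one vector per class rather than raw exemplars, consistent with the replay-free setting), while the distillation and prototype-drift terms are the backward-stability side of the objective; how the paper's future-aware regularization biases the representation toward unseen tasks is not reconstructed here.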