🤖 AI Summary
In continual learning, cross-task feature entanglement causes catastrophic forgetting and degraded discriminability. To address this, we propose a hyperspherical feature realignment framework whose key novelty is coupling a simplex equiangular tight frame (ETF) classifier with a dynamic Mixture-of-Experts (MoE) gating mechanism, enabling task-adaptive subspace projection. This design preserves structured representations while enhancing continual plasticity. Our method integrates hyperspherical normalization, the ETF classifier, MoE gating, and replay-based regularization. Evaluated on 11 benchmark datasets, it achieves an average accuracy improvement of 2%, with particularly pronounced gains on fine-grained recognition tasks. These results empirically validate the effectiveness of feature disentanglement and dynamic geometric alignment in mitigating interference across sequential tasks.
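A simplex ETF fixes K unit-norm class prototypes whose pairwise cosine similarity is exactly -1/(K-1), i.e. they are maximally and equally separated on the hypersphere. A minimal NumPy construction of such a classifier (the standard simplex-ETF formula, not the paper's code; `simplex_etf` is an illustrative name) is:

```python
import numpy as np

def simplex_etf(num_classes: int, feat_dim: int, seed: int = 0) -> np.ndarray:
    """Build a fixed simplex-ETF classifier: num_classes unit vectors in
    feat_dim dimensions with pairwise cosine similarity -1/(num_classes-1)."""
    assert feat_dim >= num_classes, "need enough dimensions for the simplex"
    rng = np.random.default_rng(seed)
    # Random orthonormal columns U (feat_dim x num_classes) via QR.
    U, _ = np.linalg.qr(rng.standard_normal((feat_dim, num_classes)))
    K = num_classes
    # Center and rescale: columns become the simplex-ETF prototypes.
    M = np.sqrt(K / (K - 1)) * U @ (np.eye(K) - np.ones((K, K)) / K)
    return M  # shape (feat_dim, num_classes); columns are class prototypes

W = simplex_etf(num_classes=4, feat_dim=8)
G = W.T @ W  # Gram matrix: 1 on the diagonal, -1/3 off-diagonal
```

Because the prototypes are predefined and never trained, only the feature extractor moves during learning, which is what makes the geometry stable across tasks.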
📝 Abstract
Continual learning enables models to learn and adapt to new tasks while retaining prior knowledge. Introducing new tasks, however, naturally leads to feature entanglement across tasks, limiting the model's ability to distinguish data from new domains. In this work, we propose a method called Feature Realignment through Experts on hyperSpHere in Continual Learning (Fresh-CL). By leveraging predefined, fixed simplex equiangular tight frame (ETF) classifiers on a hypersphere, our model improves feature separation both within and across tasks. However, the projection onto a simplex ETF shifts as new tasks arrive, disrupting the structured feature representations of previous tasks and degrading performance. We therefore propose a dynamic extension of the ETF through a mixture of experts, enabling adaptive projections onto diverse subspaces that enhance feature representation. Experiments on 11 datasets demonstrate a 2% accuracy improvement over the strongest baseline, particularly on fine-grained datasets, confirming the efficacy of combining ETF and MoE to improve feature distinction in continual learning scenarios.
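The dynamic extension can be sketched as a gate that softly mixes several expert projections of a feature before normalizing it onto the hypersphere and scoring it against the fixed prototypes. This is a minimal illustration under assumptions (linear softmax gate, one projection matrix per expert, randomly initialized stand-in prototypes); all names here are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, n_experts = 16, 5, 3

# Hypothetical components (shapes chosen for the sketch):
experts = [rng.standard_normal((d, d)) * 0.1 for _ in range(n_experts)]  # per-expert subspace projections
gate_W = rng.standard_normal((n_experts, d))                             # linear gating network
prototypes = np.linalg.qr(rng.standard_normal((d, K)))[0]                # stand-in fixed unit prototypes

def moe_hypersphere_logits(feat: np.ndarray) -> np.ndarray:
    """Softmax-gate the expert projections, normalize the mixture onto the
    unit hypersphere, and return cosine logits against fixed prototypes."""
    scores = gate_W @ feat
    g = np.exp(scores - scores.max())
    g /= g.sum()                                   # softmax gating weights
    mixed = sum(w * (P @ feat) for w, P in zip(g, experts))
    mixed /= np.linalg.norm(mixed) + 1e-8          # hyperspherical normalization
    return prototypes.T @ mixed                    # one cosine logit per class

logits = moe_hypersphere_logits(rng.standard_normal(d))
```

Since the prototypes stay fixed, only the gate and experts adapt per task, which is how the mixture can realign features without disturbing the shared classifier geometry.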