🤖 AI Summary
Brain signals degrade over time and exhibit cross-session representational shifts, causing accumulated bias and declining performance in visual–brain decoding models. To address these challenges, we propose the Bias-Mitigation Continual Learning (BRAIN) framework. BRAIN introduces three key innovations: (1) a De-bias Contrastive Learning loss that explicitly disentangles subject-specific biases from semantic features; (2) an Angular-based Forgetting Mitigation mechanism that constrains the directional stability of feature representations during parameter updates; and (3) a synergistic integration of continual learning with dynamic modeling of neural signal evolution, enabling robust representation alignment across unlabeled sessions. Evaluated on multiple public EEG benchmarks, BRAIN consistently outperforms state-of-the-art methods, achieving up to a 12.7% improvement in cross-session classification accuracy. It effectively mitigates catastrophic forgetting and bias accumulation, significantly enhancing model stability for long-term deployment in real-world brain–computer interface applications.
📝 Abstract
Memory decay makes it harder for the human brain to recognize visual objects and retain details. Consequently, recorded brain signals become weaker and more uncertain, and carry poorer visual context over time. This paper presents one of the first vision-learning approaches to address this problem. First, we statistically and experimentally demonstrate the existence of inconsistency in brain signals and its impact on Vision-Brain Understanding (VBU) models. Our findings show that brain signal representations shift across recording sessions, producing a compounding bias that hinders model learning and degrades performance. We then propose a new Bias-Mitigation Continual Learning (BRAIN) approach to address these limitations. In this approach, the model is trained in a continual learning setup and mitigates the bias that accumulates at each learning step. A new loss function, De-bias Contrastive Learning, is introduced to address the bias problem. In addition, to prevent catastrophic forgetting, where the model loses knowledge from previous sessions, a new Angular-based Forgetting Mitigation approach is introduced to preserve the knowledge learned in earlier steps. Finally, empirical experiments demonstrate that our approach achieves State-of-the-Art (SOTA) performance across various benchmarks, surpassing both prior continual learning methods and non-continual learning baselines.
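The two mechanisms named above can be sketched in NumPy. Everything below is a hypothetical illustration, not the paper's actual formulation: the function names, the idea of subtracting a per-session bias vector before an InfoNCE-style contrastive alignment, and the cosine-based drift penalty are all assumptions about how such components might look.

```python
import numpy as np

def debias_contrastive_loss(z_brain, z_image, session_bias, temperature=0.07):
    """InfoNCE-style brain-to-image alignment after removing an estimated
    session-bias vector. `session_bias` (a per-session offset in embedding
    space) is an assumption for illustration only."""
    z = z_brain - session_bias                           # remove estimated bias
    z = z / np.linalg.norm(z, axis=1, keepdims=True)     # unit-normalize
    v = z_image / np.linalg.norm(z_image, axis=1, keepdims=True)
    logits = (z @ v.T) / temperature                     # pairwise similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                   # matched pairs on diagonal

def angular_forgetting_penalty(feat_old, feat_new):
    """Penalize directional drift of features between model snapshots:
    mean (1 - cosine similarity); zero when feature directions are preserved."""
    a = feat_old / np.linalg.norm(feat_old, axis=1, keepdims=True)
    b = feat_new / np.linalg.norm(feat_new, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(a * b, axis=1)))
```

In a continual setup, the angular term would be added to the training objective at each new session so that parameter updates keep old-session feature directions stable while the contrastive term aligns the debiased brain embeddings with image embeddings.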