🤖 AI Summary
In affective computing, affective priming effects introduce implicit bias into physiological signal data, leading to model misclassification—a problem that has long gone unaddressed at the data level. This work proposes the first data-driven framework for detecting affective priming, introducing the Affective Priming Score (APS) to automatically quantify the degree of priming influence on each sample within a sequence and to enable targeted removal of high-bias samples. Leveraging the SEED and SEED-VII datasets, APS is estimated via sequence modeling and a learnable scoring mechanism. Empirical evaluation under identical model configurations shows that training on APS-filtered data significantly reduces misclassification rates—yielding an average accuracy improvement of ~8.2%—and enhances model robustness and generalization. To our knowledge, this is the first study to render affective priming in physiological signals both measurable and intervenable, thereby bridging a critical gap in bias-aware affective computing research.
📝 Abstract
Affective priming exemplifies the challenge of ambiguity in affective computing. While the community has largely addressed this issue from a label-based perspective, the problem of identifying the data points in a sequence that are affected by the priming effect—that is, the impact of priming on the data itself, particularly physiological signals—remains underexplored. Data affected by priming can lead to misclassification when used to train learning models. This study proposes the Affective Priming Score (APS), a data-driven method to detect data points influenced by the priming effect. The APS assigns a score to each data point, quantifying the extent to which it is affected by priming. To validate this method, we apply it to the SEED and SEED-VII datasets, which contain sufficient transitions between emotional events to exhibit priming effects. We train models with the same configuration on both the original data and priming-free sequences. The misclassification rate is significantly reduced when using priming-free sequences compared to the original data. This work contributes to the broader challenge of ambiguity by identifying and mitigating priming effects at the data level, enhancing model robustness, and offering valuable insights for the design and collection of affective computing datasets.
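To make the pipeline described above concrete, here is a minimal, hypothetical sketch of score-and-filter preprocessing. The scoring function below is a stand-in (an exponential decay after each emotional-event transition), not the paper's learned APS mechanism; the function names, the decay constant, and the threshold are illustrative assumptions.

```python
import numpy as np

def affective_priming_score(n_samples, transitions, decay=3.0):
    """Toy stand-in for APS: samples immediately after an emotional-event
    transition receive higher scores (stronger presumed priming), decaying
    with distance from the transition. NOT the paper's learned scorer."""
    scores = np.zeros(n_samples)
    for t in transitions:
        for i in range(t, n_samples):
            # Keep the strongest priming influence from any past transition.
            scores[i] = max(scores[i], np.exp(-(i - t) / decay))
    return scores

def filter_primed(X, y, transitions, threshold=0.5):
    """Drop samples whose (toy) priming score exceeds the threshold,
    yielding a 'priming-free' subsequence for training."""
    aps = affective_priming_score(len(X), transitions)
    keep = aps < threshold
    return X[keep], y[keep]

# Usage: 10 samples with emotional-event transitions at indices 4 and 8.
X = np.arange(10, dtype=float).reshape(-1, 1)
y = np.array([0] * 4 + [1] * 4 + [2] * 2)
X_clean, y_clean = filter_primed(X, y, transitions=[4, 8])
```

Under these toy settings, the samples right after each transition are removed, and a model would then be trained once on `(X, y)` and once on `(X_clean, y_clean)` with an identical configuration to compare misclassification rates.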