🤖 AI Summary
This work addresses the high cognitive load, time cost, and error-proneness of traditional end-to-end self-annotation in affective computing. We propose a low-overhead retrospective self-annotation method that, for the first time, integrates preference learning with the peak-end rule, using ordinal emotion representations to identify critical regions of emotional change. Users annotate only selected segments, while the remaining portions are automatically inferred via interpolation modeling, supported by a context preview mechanism to enhance annotation confidence. In a user study with 25 participants, our approach significantly reduced annotation burden and effectively captured emotional turning points, outperforming existing baselines without compromising annotation quality.
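To make the core idea concrete, below is a minimal sketch of a pairwise preference model over stimulus segments. It is not the paper's implementation: the segment features, pair labels, linear model, and annotation budget are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-segment features (e.g., physiological or audiovisual stats).
n_segments, n_features = 60, 8
X = rng.normal(size=(n_segments, n_features))
w_true = rng.normal(size=n_features)
latent = X @ w_true                       # unobserved "true" emotional intensity

# Ordinal supervision: sampled pairs (i, j) with a preference label saying
# which segment felt more intense (derived here from the synthetic latent).
idx = rng.integers(0, n_segments, size=(300, 2))
idx = idx[idx[:, 0] != idx[:, 1]]
diffs = X[idx[:, 0]] - X[idx[:, 1]]
labels = (latent[idx[:, 0]] > latent[idx[:, 1]]).astype(int)

# A linear pairwise preference model (Bradley-Terry / RankNet style) reduces
# to logistic regression on feature differences.
model = LogisticRegression(fit_intercept=False).fit(diffs, labels)
scores = X @ model.coef_.ravel()          # ordinal intensity score per segment

# Flag regions where the score changes most steeply: these are candidate
# emotional turning points that the user is asked to annotate.
change = np.abs(np.diff(scores))
budget = 5                                # assumed annotation budget
to_annotate = np.argsort(change)[-budget:]
print(sorted(to_annotate.tolist()))
```

Reducing pairwise preferences to classification on feature differences is a standard trick; any ranking model that yields a per-segment ordinal score could slot into the same pipeline.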
📝 Abstract
Self-annotation is the gold standard for collecting affective state labels in affective computing. Existing methods typically rely on full annotation, requiring users to continuously label affective states across entire sessions. While this process yields fine-grained data, it is time-consuming, cognitively demanding, and prone to fatigue and errors. To address these issues, we present PREFAB, a low-budget retrospective self-annotation method that targets affective inflection regions instead of requiring full annotation. Grounded in the peak-end rule and ordinal representations of emotion, PREFAB employs a preference-learning model to detect relative affective changes, directing annotators to label only selected segments while interpolating the remainder of the stimulus. We further introduce a preview mechanism that provides brief contextual cues to assist annotation. We evaluate PREFAB through a technical performance study and a 25-participant user study. Results show that PREFAB outperforms baselines in modeling affective inflections while reducing workload (and, under some conditions, temporal burden). Importantly, PREFAB improves annotator confidence without degrading annotation quality.
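As a concrete (assumed) illustration of the interpolation step: once the selected segments carry retrospective labels, the rest of the trace can be filled in. The sketch below uses simple piecewise-linear interpolation over segment indices; the paper's interpolation model may differ, and the label positions and values are invented.

```python
import numpy as np

# Sparse retrospective labels at the selected inflection segments
# (positions and values are illustrative).
annotated_t = np.array([0, 12, 23, 41, 59])        # labeled segment indices
annotated_v = np.array([0.1, 0.8, 0.3, 0.9, 0.2])  # e.g., arousal in [0, 1]

# Infer the unannotated remainder by piecewise-linear interpolation.
t = np.arange(60)
trace = np.interp(t, annotated_t, annotated_v)
print(trace[:15])
```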