🤖 AI Summary
This work identifies and systematically validates the “ordinal bias” problem in instructional video action recognition: mainstream models rely excessively on the canonical temporal ordering of actions in their training data, so their performance degrades sharply on non-canonical action sequences. To address this, the authors propose two video perturbation techniques, frame-level Action Masking of frequently co-occurring actions and segment-level Sequence Shuffling of action order, and introduce a cross-model bias diagnostic framework. This establishes, for the first time, an evaluation paradigm explicitly targeting temporal robustness in action recognition. Experiments show that state-of-the-art models suffer an average accuracy drop of 32.7% on the perturbed videos, confirming their heavy dependence on temporal priors. The work provides a reproducible diagnostic toolkit and actionable pathways toward models with genuine semantic understanding rather than mere temporal pattern memorization, thereby advancing robust, generalizable action recognition.
📝 Abstract
Action recognition models have achieved promising results in understanding instructional videos. However, they often rely on dominant, dataset-specific action sequences rather than true video comprehension, a problem that we define as ordinal bias. To address this issue, we propose two effective video manipulation methods: Action Masking, which masks frames of frequently co-occurring actions, and Sequence Shuffling, which randomizes the order of action segments. Through comprehensive experiments, we demonstrate that current models exhibit significant performance drops when confronted with nonstandard action sequences, underscoring their vulnerability to ordinal bias. Our findings emphasize the importance of rethinking evaluation strategies and developing models capable of generalizing beyond fixed action patterns in diverse instructional videos.
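The two perturbations described above can be sketched in a few lines. This is a minimal illustration, not the paper's released code: the segment format `(start, end, label)`, the `cooccurrence_counts` statistic, and the `None` frame placeholder are all assumptions made for the example.

```python
import random

def action_masking(frames, segments, cooccurrence_counts, top_k=1):
    """Mask the frames of the top-k most frequently co-occurring actions.

    frames: per-frame representations (any list)
    segments: list of (start, end, action_label) tuples
    cooccurrence_counts: hypothetical dict mapping action_label -> how often
        that action co-occurs with others across the dataset
    """
    # Select the actions that co-occur most often (the strongest ordinal cues).
    masked_actions = set(
        sorted(cooccurrence_counts, key=cooccurrence_counts.get, reverse=True)[:top_k]
    )
    masked = list(frames)
    for start, end, label in segments:
        if label in masked_actions:
            for i in range(start, end):
                masked[i] = None  # stand-in for a blacked-out frame
    return masked

def sequence_shuffling(frames, segments, seed=None):
    """Randomize the order of action segments, keeping each segment's
    frames internally intact."""
    rng = random.Random(seed)
    order = list(range(len(segments)))
    rng.shuffle(order)
    shuffled = []
    for idx in order:
        start, end, _ = segments[idx]
        shuffled.extend(frames[start:end])
    return shuffled
```

A model that truly understands the video content should still recognize the surviving actions after masking and each individual action after shuffling; a model exploiting ordinal bias will not.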