🤖 AI Summary
Existing state-space model (SSM)-based video action recognition methods generalize poorly across spatio-temporal resolutions, degrading significantly on video scales unseen during training. To address this, we propose StretchySnake, the first systematic framework to exploit the intrinsic adaptability of SSMs through flexible training: by sampling multi-scale spatio-temporal inputs and dynamically interpolating model weights, it adapts to videos of arbitrary length and resolution. We design and empirically validate five flexible training variants, unifying short-clip and long-video representation learning. On standard benchmarks, including UCF-101, HMDB-51, COIN, and Breakfast, StretchySnake outperforms both transformer- and SSM-based baselines by up to 28%, and further demonstrates strong fine-grained recognition on Something-Something V2 and Diving-48.
📝 Abstract
State space models (SSMs) have emerged as a competitive alternative to transformers in various tasks. Their linear complexity and hidden-state recurrence make them particularly attractive for modeling long sequences, where attention becomes quadratically expensive. However, current training methods for video understanding are tailored towards transformers and fail to fully leverage the unique attributes of SSMs. For example, video models are often trained at a fixed resolution and video length to balance the quadratic cost of attention against performance. Consequently, these models suffer degraded performance when evaluated on videos with spatial and temporal resolutions unseen during training, a property we call spatio-temporal inflexibility. In the context of action recognition, this severely limits a model's ability to retain performance across both short- and long-form videos. We therefore propose a flexible training method that leverages and improves the inherent adaptability of SSMs. Our method samples videos at varying temporal and spatial resolutions during training and dynamically interpolates model weights to accommodate any spatio-temporal scale. This instills our SSM, which we call StretchySnake, with spatio-temporal flexibility and enables it to seamlessly handle videos ranging from short, fine-grained clips to long, complex activities. We introduce and compare five variants of flexible training and identify the most effective strategy for video SSMs. On short-action (UCF-101, HMDB-51) and long-action (COIN, Breakfast) benchmarks, StretchySnake outperforms transformer and SSM baselines alike by up to 28%, with strong adaptability to fine-grained actions (SSV2, Diving-48). Our method thus provides a simple drop-in training recipe that makes video SSMs more robust, resolution-agnostic, and efficient across diverse action recognition scenarios.
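To make the recipe concrete, below is a minimal sketch of one flexible training step, assuming a backbone with a learnable positional embedding over a (T, H, W) token grid. Every name and value here (`sample_scales`, `interpolate_pos_embed`, the candidate clip lengths and resolutions, the patch size) is an illustrative assumption rather than the paper's actual API, and trilinear interpolation of positional embeddings is only one common way to realize the "dynamic weight interpolation" the abstract describes.

```python
# A minimal sketch of flexible spatio-temporal training (PyTorch).
# ASSUMPTIONS: all names, candidate scales, and the patch size below are
# hypothetical stand-ins; StretchySnake's actual weight interpolation may differ.
import random
import torch
import torch.nn.functional as F

def sample_scales():
    """Draw a random clip length and spatial resolution for this step."""
    frames = random.choice([8, 16, 32])      # assumed candidate clip lengths
    side = random.choice([112, 160, 224])    # assumed candidate resolutions
    return frames, side

def interpolate_pos_embed(pos_embed, old_grid, new_grid):
    """Resize a (1, T*H*W, C) positional embedding to a new (T', H', W')
    token grid via trilinear interpolation."""
    t0, h0, w0 = old_grid
    t1, h1, w1 = new_grid
    c = pos_embed.shape[-1]
    # (1, T*H*W, C) -> (1, C, T, H, W), the layout F.interpolate expects.
    pe = pos_embed.reshape(1, t0, h0, w0, c).permute(0, 4, 1, 2, 3)
    pe = F.interpolate(pe, size=(t1, h1, w1),
                       mode="trilinear", align_corners=False)
    return pe.permute(0, 2, 3, 4, 1).reshape(1, t1 * h1 * w1, c)

# --- one flexible training step ---
patch = 16                                      # assumed spatial patch size
base_grid = (16, 224 // patch, 224 // patch)    # grid the embedding was built for
pos_embed = torch.randn(1, 16 * 14 * 14, 384)   # stands in for model.pos_embed

video = torch.randn(2, 3, 32, 256, 256)         # (B, C, T, H, W) raw clip
frames, side = sample_scales()
clip = F.interpolate(video, size=(frames, side, side),
                     mode="trilinear", align_corners=False)
new_grid = (frames, side // patch, side // patch)
pos = interpolate_pos_embed(pos_embed, base_grid, new_grid)
# `clip` and `pos` now share one spatio-temporal scale and can be fed to
# the backbone for the usual forward/backward pass.
```

Resampling a fresh scale at every step exposes the model to the full range of spatio-temporal resolutions it may see at inference, which is the essence of the flexible training the abstract describes.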