🤖 AI Summary
This work addresses the online detection of object take-and-release (OTR) actions in untrimmed first-person videos, tackling challenges that include severe label imbalance, temporal sparsity of positive instances, the need for precise temporal localization, and strict computational efficiency constraints. We propose a lightweight temporal modeling approach based on the Mamba architecture, integrating temporal recurrence, sliding-window processing, and streaming inference. To improve training robustness and inference accuracy under sparse annotations, we introduce the focal loss and an alignment-aware regularization scheme. On EPIC-KITCHENS-100, our method achieves 45.48 mp-mAP (sliding-window) and 43.35 mp-mAP (streaming), significantly outperforming both Transformer-based and standard Mamba baselines. This is the first work to demonstrate the effectiveness and deployment advantages of state-space models for real-time OTR detection.
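The contrast between the two inference modes can be illustrated with a toy recurrence (a stand-in for a state-space layer, not the paper's model): sliding-window inference re-runs the recurrence from scratch inside each window, while streaming inference carries the hidden state forward one update per frame. All function names here are illustrative assumptions.

```python
def step(state, x, decay=0.9):
    """One recurrent update; a toy stand-in for a state-space (Mamba-like) layer."""
    return decay * state + (1.0 - decay) * x

def sliding_window(frames, win=4):
    """Re-run the recurrence from scratch inside each window: O(T * win) work."""
    outs = []
    for t in range(len(frames)):
        state = 0.0
        for x in frames[max(0, t - win + 1): t + 1]:
            state = step(state, x)
        outs.append(state)
    return outs

def streaming(frames):
    """Carry the recurrent state across the whole video: O(T) work, unbounded memory of the past."""
    state, outs = 0.0, []
    for x in frames:
        state = step(state, x)
        outs.append(state)
    return outs
```

On a constant input, the streaming recurrence accumulates evidence beyond the window boundary that the sliding-window variant discards, which mirrors why streaming inference is cheaper per frame yet can see longer context.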
📝 Abstract
This work tackles the problem of Online detection of Take and Release (OTR) of an object in untrimmed egocentric videos. This task is challenging due to severe label imbalance, with temporally sparse positive annotations, and the need for precise temporal predictions. Furthermore, methods need to be computationally efficient to be deployed in real-world online settings. To address these challenges, we propose Mamba-OTR, a model based on the Mamba architecture. Mamba-OTR is designed to exploit temporal recurrence during inference while being trained on short video clips. To address label imbalance, our training pipeline incorporates the focal loss and a novel regularization scheme that aligns model predictions with the evaluation metric. Extensive experiments on EPIC-KITCHENS-100, comparisons with transformer-based approaches, and the evaluation of different training and test schemes demonstrate the superiority of Mamba-OTR in both accuracy and efficiency. These findings are particularly evident when evaluating full-length videos or high frame-rate sequences, even when the model is trained on short video snippets for computational convenience. The proposed Mamba-OTR achieves a noteworthy mp-mAP of 45.48 when operating in a sliding-window fashion, and 43.35 in streaming mode, versus 20.32 for a vanilla transformer and 25.16 for a vanilla Mamba, thus providing a strong baseline for OTR. We will publicly release the source code of Mamba-OTR to support future research.
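The focal loss mentioned above is the standard formulation of Lin et al.: it down-weights well-classified examples so that the rare positive frames dominate the gradient. A minimal sketch for a single binary prediction (the `alpha`/`gamma` defaults are the common choices from the original paper, not necessarily the settings used in Mamba-OTR):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    p: predicted probability of the positive class.
    y: ground-truth label in {0, 1}.
    With gamma = 0 and alpha = 0.5 this reduces to (half the) cross-entropy.
    """
    p_t = p if y == 1 else 1.0 - p          # probability assigned to the true class
    a_t = alpha if y == 1 else 1.0 - alpha  # class-balancing weight
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)

# An easy, well-classified positive contributes far less loss than a hard one,
# which keeps the abundant easy negatives from swamping the sparse positives.
easy = focal_loss(0.9, 1)
hard = focal_loss(0.1, 1)
```

In practice one would use a vectorized implementation such as `torchvision.ops.sigmoid_focal_loss` over per-frame logits; the sketch above only exposes the weighting mechanism.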