Mamba-OTR: a Mamba-based Solution for Online Take and Release Detection from Untrimmed Egocentric Video

📅 2025-07-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the online detection of object Take and Release (OTR) actions in untrimmed first-person videos, tackling challenges including severe label imbalance, temporal sparsity of positive instances, the need for precise temporal localization, and strict computational efficiency constraints. The authors propose a lightweight temporal modeling approach based on the Mamba architecture, integrating temporal recurrence, sliding-window processing, and streaming inference. To improve training robustness and inference accuracy under sparse annotations, they introduce the focal loss and an alignment-aware regularization scheme. On EPIC-KITCHENS-100, the method achieves 45.48 mp-mAP (sliding-window) and 43.35 mp-mAP (streaming), significantly outperforming both Transformer-based and standard Mamba baselines. This is the first work to demonstrate the effectiveness and deployment advantages of state-space models for real-time OTR detection.

📝 Abstract
This work tackles the problem of Online detection of Take and Release (OTR) of an object in untrimmed egocentric videos. This task is challenging due to severe label imbalance, with temporally sparse positive annotations, and the need for precise temporal predictions. Furthermore, methods need to be computationally efficient in order to be deployed in real-world online settings. To address these challenges, we propose Mamba-OTR, a model based on the Mamba architecture. Mamba-OTR is designed to exploit temporal recurrence during inference while being trained on short video clips. To address label imbalance, our training pipeline incorporates the focal loss and a novel regularization scheme that aligns model predictions with the evaluation metric. Extensive experiments on EPIC-KITCHENS-100, comparisons with transformer-based approaches, and the evaluation of different training and test schemes demonstrate the superiority of Mamba-OTR in both accuracy and efficiency. These findings are particularly evident when evaluating full-length videos or high frame-rate sequences, even when training on short video snippets for computational convenience. The proposed Mamba-OTR achieves a noteworthy mp-mAP of 45.48 when operating in a sliding-window fashion, and 43.35 in streaming mode, versus the 20.32 of a vanilla transformer and 25.16 of a vanilla Mamba, thus providing a strong baseline for OTR. We will publicly release the source code of Mamba-OTR to support future research.
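The abstract's remedy for label imbalance, the focal loss, down-weights well-classified easy negatives so that the sparse positive take/release frames contribute most of the gradient. A minimal sketch of the binary focal loss for a single frame prediction (standalone Python; the `alpha` and `gamma` defaults are common choices from the focal loss literature, not values stated on this page):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for one frame.

    p: predicted probability that a take/release event occurs at this frame.
    y: ground-truth label (1 = event frame, 0 = background frame).
    The (1 - p_t)**gamma factor shrinks the loss of confident, correct
    predictions, so abundant easy background frames are down-weighted.
    """
    pt = p if y == 1 else 1.0 - p          # probability of the true class
    a = alpha if y == 1 else 1.0 - alpha   # class-balancing weight
    return -a * (1.0 - pt) ** gamma * math.log(max(pt, 1e-12))
```

With `gamma=2`, a confidently correct background frame (e.g. `p=0.1, y=0`) incurs roughly 100x less loss than under plain cross-entropy, which is what keeps the rare positive frames from being drowned out.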
Problem

Research questions and friction points this paper is trying to address.

Online detection of object Take and Release in egocentric videos
Addressing severe label imbalance and sparse positive annotations
Ensuring computational efficiency for real-world online deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mamba-based architecture for efficient online detection
Focal loss and novel regularization for label imbalance
Temporal recurrence during inference for accuracy
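The two inference modes compared on this page can be illustrated with a toy scalar state-space recurrence (a hypothetical stand-in, not the actual Mamba kernel): streaming carries the hidden state across the whole video at O(1) work per new frame, while sliding-window re-runs the recurrence over the last `window` frames at each step.

```python
def streaming_scores(xs, a=0.9, b=0.1):
    """Streaming mode: h_t = a*h_{t-1} + b*x_t, hidden state carried
    across the entire sequence; O(1) work per incoming frame."""
    h, out = 0.0, []
    for x in xs:
        h = a * h + b * x
        out.append(h)
    return out

def sliding_window_scores(xs, window=4, a=0.9, b=0.1):
    """Sliding-window mode: for each frame, restart the recurrence from
    scratch on the last `window` frames; O(window) work per frame."""
    out = []
    for t in range(len(xs)):
        h = 0.0
        for x in xs[max(0, t + 1 - window): t + 1]:
            h = a * h + b * x
        out.append(h)
    return out
```

The two modes agree while the window still covers the whole prefix and diverge afterwards, which mirrors why the paper reports separate mp-mAP figures (45.48 sliding-window vs 43.35 streaming) for the same trained model.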
Alessandro Sebastiano Catinello
Department of Mathematics and Computer Science, University of Catania, Italy
Giovanni Maria Farinella
University of Catania
Computer Vision, Machine Learning
Antonino Furnari
Assistant Professor at the University of Catania
Computer Vision