Long-term Frame-Event Visual Tracking: Benchmark Dataset and Baseline

📅 2024-03-09
🏛️ arXiv.org
📈 Citations: 11 (influential: 0)
🤖 AI Summary
Existing event-stream trackers are evaluated almost exclusively on short-term datasets, which fails to reflect real-world long-term tracking requirements. To address this, the authors introduce FELT, a large-scale, long-term frame-event single-object tracking benchmark comprising 742 videos and roughly 1.59 million RGB frame and event-stream pairs, and re-train and evaluate 15 baseline trackers on it. Because both modalities are naturally incomplete (RGB degrades under challenging conditions; event flow is spatially sparse), they further propose an associative memory Transformer that introduces modern Hopfield layers into multi-head self-attention blocks, serving as a unified backbone that fuses RGB and event data and models long-range temporal dependencies. Experiments on FELT, RGBT234, LasHeR, and DepthTrack validate the model's effectiveness, and both the code and the FELT dataset are publicly released.
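The summary above hinges on pairing each RGB frame with its concurrent event stream. A common way to make sparse, asynchronous events digestible by a frame-based backbone is to accumulate them into a dense tensor over each frame's exposure window; the sketch below shows one generic per-polarity count representation and is an illustrative assumption, not necessarily FELT's exact preprocessing.

```python
# Generic sketch: accumulate an event stream into a 2-channel count image
# aligned with one RGB frame. Not necessarily the FELT pipeline's format.
import numpy as np

def events_to_frame(events, h, w):
    """events: iterable of (t, x, y, polarity) with polarity in {-1, +1};
    returns a (2, h, w) float tensor of per-polarity event counts."""
    img = np.zeros((2, h, w), dtype=np.float32)
    for _t, x, y, p in events:
        # channel 0 counts positive events, channel 1 negative events
        img[0 if p > 0 else 1, int(y), int(x)] += 1.0
    return img
```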

📝 Abstract
Current event-/frame-event based trackers are evaluated on short-term tracking datasets; however, real-world tracking involves long-term scenarios, and the performance of existing tracking algorithms in these scenarios remains unclear. In this paper, we first propose a new long-term and large-scale frame-event single object tracking dataset, termed FELT. It contains 742 videos and 1,594,474 RGB frame and event stream pairs, making it the largest frame-event tracking dataset to date. We re-train and evaluate 15 baseline trackers on our dataset for future works to compare against. More importantly, we find that the RGB frames and event streams are naturally incomplete due to the influence of challenging factors and the spatially sparse event flow. In response, we propose a novel associative memory Transformer network as a unified backbone by introducing modern Hopfield layers into multi-head self-attention blocks to fuse both RGB and event data. Extensive experiments on RGB-Event (FELT), RGB-Thermal (RGBT234, LasHeR), and RGB-Depth (DepthTrack) datasets fully validate the effectiveness of our model. The dataset and source code can be found at https://github.com/Event-AHU/FELT_SOT_Benchmark.
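The abstract's key architectural idea, modern Hopfield layers inside multi-head self-attention, amounts to a one-step associative retrieval: stored patterns are looked up via softmax(beta * Q K^T) V, the update rule of continuous modern Hopfield networks (Ramsauer et al.). The PyTorch sketch below illustrates that retrieval step; the module name, dimensions, and the way RGB and event tokens are fed in are assumptions for illustration, not the authors' exact implementation.

```python
# Minimal sketch of one modern-Hopfield retrieval step, the building block
# the paper places inside its self-attention stack. Illustrative only.
import torch
import torch.nn as nn

class HopfieldRetrieval(nn.Module):
    """One update of a continuous modern Hopfield network:
    retrieved = softmax(beta * Q K^T) V. For well-separated stored
    patterns, a single update already converges to the retrieved pattern."""
    def __init__(self, dim, beta=None):
        super().__init__()
        self.q = nn.Linear(dim, dim)      # projects queries (current tokens)
        self.k = nn.Linear(dim, dim)      # projects stored patterns (memory)
        self.v = nn.Linear(dim, dim)
        self.beta = beta or dim ** -0.5   # inverse temperature

    def forward(self, queries, memory):
        # queries: (B, Nq, D) search-region tokens; memory: (B, Nm, D) stored patterns
        attn = torch.softmax(
            self.beta * self.q(queries) @ self.k(memory).transpose(-2, -1), dim=-1
        )
        return attn @ self.v(memory)      # patterns retrieved from memory
```

In the tracking setting, the queries would plausibly be fused RGB+event search-region tokens and the memory a bank of stored target templates, so that degraded observations in one modality can be completed from patterns retrieved out of memory.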
Problem

Research questions and friction points this paper is trying to address.

Evaluating long-term tracking performance in real-world scenarios
Creating a large-scale dataset for frame-event visual tracking
Developing an associative memory-based tracker for appearance variations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Associative memory Transformer as a unified backbone for RGB-Event tracking
Dynamic template update via associative memory (see the sketch after this list)
Large-scale FELT dataset with 742 videos and 1.59M frame-event pairs
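Hedged sketch of what a dynamic template update via associative memory could look like in practice: a bounded memory bank of past template features is refreshed only on confident frames, then fused into the template used for matching. The function name, threshold, and mean-fusion choice are hypothetical, not the paper's specification.

```python
# Hypothetical dynamic template update over a bounded associative memory bank.
import torch

def update_template(memory_bank, cur_feat, score, thresh=0.7, max_size=8):
    """Append the current template feature when tracker confidence is high,
    keep the memory bounded, and return a fused template (simple mean here)."""
    if not memory_bank or score > thresh:
        memory_bank.append(cur_feat.detach())
    if len(memory_bank) > max_size:
        memory_bank.pop(0)  # drop the oldest pattern to bound memory on long videos
    return torch.stack(memory_bank).mean(dim=0)
```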
👥 Authors
Xiao Wang
School of Computer Science and Technology, Anhui University, Hefei, China
Ju Huang
School of Computer Science and Technology, Anhui University, Hefei, China
Shiao Wang
Anhui University (deep learning)
Chuanming Tang
University of Chinese Academy of Sciences | Computer Vision Center, UAB (computer vision, object tracking)
Bowei Jiang
School of Computer Science and Technology, Anhui University, Hefei, China
Yonghong Tian
Peng Cheng Laboratory, Shenzhen, China; National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, China; School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, China
Jin Tang
Anhui University (computer vision, intelligent video analysis)
Bin Luo
School of Computer Science and Technology, Anhui University, Hefei, China