PlanarTrack: A high-quality and challenging benchmark for large-scale planar object tracking

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Planar tracking has long suffered from the absence of large-scale, high-quality, and challenging benchmark datasets. To address this, we introduce the largest and most diverse planar object tracking benchmark to date, comprising 1,150 real-world complex-scene video sequences (733K frames), each containing a single unique planar target and supporting both short-term and long-term tracking. Annotations are performed manually via precise four-corner point labeling and rigorously validated through multi-round cross-checking, achieving high geometric accuracy. We conduct the first systematic evaluation of ten state-of-the-art trackers on this benchmark, revealing significant performance degradation under realistic challenges including scale variation, occlusion, and motion blur. Our benchmark thus provides a rigorous, reproducible foundation for algorithmic evaluation, failure-mode analysis, and methodological innovation in planar tracking.
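A four-corner annotation per frame is equivalent to specifying a homography from the target's canonical plane into the image, which is what makes this labeling scheme geometrically precise. As a minimal NumPy sketch (not code from the paper or its toolkit; the function name and the unit-square canonical frame are illustrative assumptions), the homography can be recovered from the four correspondences with the direct linear transform:

```python
import numpy as np

def homography_from_corners(src, dst):
    """Estimate the 3x3 homography H (up to scale) mapping four source
    corners to four destination corners via the direct linear transform."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on h.
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)   # (8, 9) system A @ h = 0
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)            # null-space vector of A
    return H / H[2, 2]                  # fix the overall scale

# Map a unit square (canonical target) onto annotated image corners.
canonical = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
annotated = np.array([[120, 80], [380, 95], [370, 300], [110, 280]], float)
H = homography_from_corners(canonical, annotated)
```

With a per-frame homography in hand, attributes that deform the plane (scale variation, rotation, perspective change) become measurable directly from the annotations.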

📝 Abstract
Planar tracking has drawn increasing interest owing to its key roles in robotics and augmented reality. Despite great recent advances, the development of planar tracking, particularly in the deep learning era, lags behind generic tracking due to the lack of large-scale platforms. To mitigate this, we propose PlanarTrack, a large-scale, high-quality, and challenging benchmark for planar tracking. Specifically, PlanarTrack consists of 1,150 sequences with over 733K frames, including 1,000 short-term and 150 new long-term videos, enabling comprehensive evaluation of both short- and long-term tracking performance. All videos in PlanarTrack are recorded in unconstrained, in-the-wild conditions, which makes PlanarTrack challenging but more realistic for real-world applications. To ensure high-quality annotations, each video frame is manually annotated with four corner points through multi-round meticulous inspection and refinement. To enhance target diversity, each sequence captures a unique target, which differs from existing benchmarks. To the best of our knowledge, PlanarTrack is by far the largest, most diverse, and most challenging dataset dedicated to planar tracking. To understand the performance of existing methods on PlanarTrack and to provide a comparison for future research, we evaluate 10 representative planar trackers with extensive comparison and in-depth analysis. Our evaluation reveals that, unsurprisingly, the top planar trackers degrade heavily on the challenging PlanarTrack, indicating that more effort is required to improve planar tracking. Our data and results will be released at https://github.com/HengLan/PlanarTrack
Problem

Research questions and friction points this paper is trying to address.

Addresses the lack of a large-scale benchmark for planar object tracking
Provides a high-quality, manually annotated dataset for realistic tracking evaluation
Evaluates the performance degradation of existing trackers under challenging scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale benchmark with 1,150 video sequences (over 733K frames)
Manual four-corner-point annotations with multi-round quality control
Evaluation of 10 representative trackers for performance comparison (see the metric sketch below)
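As a hedged sketch of how such comparisons are typically scored, the snippet below computes an alignment error (root-mean-square corner distance) per frame and a precision curve over error thresholds, the protocol common to planar tracking benchmarks such as POT-210. The exact metric definitions used by PlanarTrack are given in the paper; the 1-50 pixel threshold range here is an assumption borrowed from common practice.

```python
import numpy as np

def alignment_error(pred, gt):
    """RMS distance over the four corners of one frame.
    pred, gt: (4, 2) arrays of predicted / ground-truth (x, y) corners."""
    return np.sqrt(np.mean(np.sum((pred - gt) ** 2, axis=-1)))

def precision_curve(preds, gts, thresholds=np.arange(1, 51)):
    """Fraction of frames whose alignment error is below each threshold.
    preds, gts: (N, 4, 2) corner arrays over N frames."""
    errors = np.array([alignment_error(p, g) for p, g in zip(preds, gts)])
    return np.array([(errors <= t).mean() for t in thresholds])
```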
👥 Authors
Yifan Jiao · Institute of Software, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Xinran Liu · Ph.D. candidate, Vanderbilt University (optimal transport, machine learning)
Xiaoqiong Liu · Dept. of Computer Science & Engineering, University of North Texas, Denton, United States of America
Xiaohui Yuan · Dept. of Computer Science & Engineering, University of North Texas, Denton, United States of America
Heng Fan · Assistant Professor, University of North Texas (Computer Vision, Machine Learning, Artificial Intelligence)
Libo Zhang · Institute of Software, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China