GOT-JEPA: Generic Object Tracking with Model Adaptation and Occlusion Handling using Joint-Embedding Predictive Architecture

📅 2026-02-16
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited generalization of existing generic object trackers in unseen scenarios and their coarse modeling of occlusion, which lacks fine-grained reasoning. To overcome these limitations, we propose the GOT-JEPA framework, which introduces the Joint-Embedding Predictive Architecture (JEPA) to visual tracking for the first time. Leveraging a teacher–student self-supervised mechanism, GOT-JEPA predicts pseudo-labels consistent with clean reference frames even from occluded or corrupted inputs, thereby achieving strong generalization. Furthermore, we design the OccuSolver module, which integrates point tracking with object priors to perform iterative, fine-grained visibility estimation and occlusion modeling. Extensive experiments demonstrate that our approach significantly enhances tracking robustness across seven benchmark datasets, particularly excelling in challenging occlusion and interference scenarios.

๐Ÿ“ Abstract
The human visual system tracks objects by integrating current observations with previously observed information, adapting to target and scene changes, and reasoning about occlusion at fine granularity. In contrast, recent generic object trackers are often optimized for training targets, which limits robustness and generalization in unseen scenarios, and their occlusion reasoning remains coarse, lacking detailed modeling of occlusion patterns. To address these limitations in generalization and occlusion perception, we propose GOT-JEPA, a model-predictive pretraining framework that extends JEPA from predicting image features to predicting tracking models. Given identical historical information, a teacher predictor generates pseudo-tracking models from a clean current frame, and a student predictor learns to predict the same pseudo-tracking models from a corrupted version of the current frame. This design provides stable pseudo supervision and explicitly trains the predictor to produce reliable tracking models under occlusions, distractors, and other adverse observations, improving generalization to dynamic environments. Building on GOT-JEPA, we further propose OccuSolver to enhance occlusion perception for object tracking. OccuSolver adapts a point-centric point tracker for object-aware visibility estimation and detailed occlusion-pattern capture. Conditioned on object priors iteratively generated by the tracker, OccuSolver incrementally refines visibility states, strengthens occlusion handling, and produces higher-quality reference labels that progressively improve subsequent model predictions. Extensive evaluations on seven benchmarks show that our method effectively enhances tracker generalization and robustness.
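The teacher–student scheme in the abstract can be sketched in miniature: a teacher produces a pseudo-target from the clean frame, and a student is trained to reproduce that target from a corrupted view, with the teacher updated as an exponential moving average of the student. Everything below is an illustrative assumption, not the paper's implementation: the real GOT-JEPA predictors emit tracking-model parameters from deep visual features, whereas this toy uses flat vectors, a two-layer map, and random feature masking as a stand-in for occlusion.

```python
import numpy as np

# Toy sketch (assumed, not from the paper): flat feature vectors stand in
# for frames; random masking stands in for occlusion/corruption.
rng = np.random.default_rng(0)
DIM, HIDDEN, LR, EMA = 32, 16, 1e-2, 0.99

student = {"W": rng.normal(0, 0.1, (HIDDEN, DIM)),
           "V": rng.normal(0, 0.1, (DIM, HIDDEN))}
teacher = {k: v.copy() for k, v in student.items()}  # frozen copy, EMA-updated

def predict(params, x):
    """Tiny two-layer map standing in for a predictor network."""
    return params["V"] @ np.tanh(params["W"] @ x)

def corrupt(x, drop=0.5):
    """Simulate occlusion by zeroing a random subset of features."""
    return x * (rng.random(x.shape) > drop)

for step in range(500):
    frame = rng.normal(size=DIM)
    target = predict(teacher, frame)     # pseudo-label from the clean frame
    x = corrupt(frame)                   # student only sees a corrupted view
    h = np.tanh(student["W"] @ x)
    err = student["V"] @ h - target      # gradient of 0.5 * ||out - target||^2
    # Manual backprop through the two-layer student.
    student["V"] -= LR * np.outer(err, h)
    g_h = (student["V"].T @ err) * (1.0 - h ** 2)
    student["W"] -= LR * np.outer(g_h, x)
    # Teacher tracks the student via an exponential moving average,
    # giving stable pseudo supervision (no gradient flows into it).
    for k in teacher:
        teacher[k] = EMA * teacher[k] + (1.0 - EMA) * student[k]

# Gap between the student's prediction from a corrupted frame and the
# teacher's pseudo-label for the matching clean frame.
frame = rng.normal(size=DIM)
gap = float(np.linalg.norm(predict(student, corrupt(frame)) - predict(teacher, frame)))
```

The EMA update is what makes the pseudo-labels stable: the teacher changes slowly, so the student regresses toward a consistent target rather than chasing its own moving predictions.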
Problem

Research questions and friction points this paper is trying to address.

generic object tracking
occlusion handling
model generalization
visual tracking
occlusion reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Joint-Embedding Predictive Architecture
Model Adaptation
Occlusion Handling
Generic Object Tracking
Self-supervised Pretraining