UniViTAR: Unified Vision Transformer with Native Resolution

📅 2025-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional vision Transformers rely on fixed input resolutions, compromising spatial context fidelity and modality diversity for natural images and videos. This work introduces a native-resolution unified visual foundation model that eliminates resolution constraints and enables joint representation learning for both images and videos. Our method comprises three key innovations: (1) a novel resolution curriculum learning strategy coupled with intra-batch cross-modal (image/video) switching during training; (2) a unified Vision Transformer architecture explicitly designed for native-resolution inputs; and (3) a hybrid contrastive distillation framework integrating sigmoid-based contrastive loss with frozen-teacher-guided feature distillation. Evaluated at 0.3B–1B parameter scales and trained exclusively on public data, the model achieves high-fidelity spatiotemporal modeling and rapid convergence. It significantly enhances multi-scale and multimodal visual understanding across diverse downstream tasks.
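The "native-resolution inputs" described above amount to patchifying each image at its own size, yielding a variable-length token sequence instead of resizing everything to a fixed grid. A minimal sketch of that idea (the function name and padding scheme are illustrative, not from the paper):

```python
import numpy as np

def patchify_native(image, patch=16):
    """Split an H x W x C image of arbitrary size into a variable-length
    patch sequence, padding only up to the nearest patch multiple instead
    of resizing the whole image to a fixed resolution."""
    h, w, c = image.shape
    ph = -(-h // patch) * patch  # ceil h to a multiple of the patch size
    pw = -(-w // patch) * patch  # ceil w likewise
    padded = np.zeros((ph, pw, c), dtype=image.dtype)
    padded[:h, :w] = image
    # (ph/p, p, pw/p, p, c) -> (num_patches, p * p * c)
    patches = padded.reshape(ph // patch, patch, pw // patch, patch, c)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * c)
    return patches
```

Because the sequence length now varies per image, the Transformer must accept variable-length inputs, which is exactly the adaptability the training paradigm below exploits.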

📝 Abstract
Conventional Vision Transformers simplify visual modeling by standardizing input resolutions, often disregarding the variability of natural visual data and compromising spatial-contextual fidelity. While preliminary explorations have superficially investigated native resolution modeling, existing approaches still lack systematic analysis from a visual representation perspective. To bridge this gap, we introduce UniViTAR, a family of homogeneous vision foundation models tailored for unified visual modalities and native-resolution scenarios in the multimodal era. Our framework first upgrades the vanilla architecture by integrating multiple advanced components. Building upon these improvements, a progressive training paradigm is introduced, which strategically combines two core mechanisms: (1) resolution curriculum learning, transitioning from fixed-resolution pretraining to native-resolution tuning, thereby leveraging ViT's inherent adaptability to variable-length sequences, and (2) visual modality adaptation via inter-batch image-video switching, which balances computational efficiency with enhanced temporal reasoning. In parallel, a hybrid training framework further synergizes sigmoid-based contrastive loss with feature distillation from a frozen teacher model, thereby accelerating early-stage convergence. Finally, trained exclusively on public datasets, extensive experiments across multiple model scales from 0.3B to 1B demonstrate the framework's effectiveness.
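The two training mechanisms in the abstract can be sketched as simple schedules: a curriculum that starts at a fixed resolution and later switches to each sample's native size, and an inter-batch alternation between image and video batches. All names, thresholds, and ratios below are illustrative assumptions, not values from the paper:

```python
def resolution_curriculum(step, warmup_steps=10_000, base_res=224):
    """Stage 1: train at a fixed low resolution; stage 2: return None,
    meaning each sample keeps its own native resolution.
    warmup_steps and base_res are assumed, not the paper's values."""
    return base_res if step < warmup_steps else None

def modality_switching(num_batches, video_every=4):
    """Inter-batch image/video switching: whole batches alternate modality
    (here, one video batch every `video_every` batches), so each batch stays
    homogeneous and cheap to collate while video still contributes
    temporal signal."""
    return ["video" if (i + 1) % video_every == 0 else "image"
            for i in range(num_batches)]
```

Keeping each batch single-modality is what lets the switching scheme balance efficiency (no mixed-shape collation) against temporal reasoning, as the abstract describes.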
Problem

Research questions and friction points this paper is trying to address.

Addresses variability in natural visual data resolution
Improves spatial-contextual fidelity in Vision Transformers
Unifies visual modality and native resolution modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified Vision Transformer for native resolution
Progressive training with resolution curriculum
Hybrid training with contrastive loss and distillation
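The hybrid objective listed above combines a SigLIP-style pairwise sigmoid contrastive loss with feature distillation from a frozen teacher. A minimal numpy sketch, assuming cosine-similarity logits, a learnable temperature/bias (fixed here for simplicity), and cosine-distance distillation (the exact distillation form is an assumption):

```python
import numpy as np

def sigmoid_contrastive_loss(img_emb, txt_emb, temperature=10.0, bias=-10.0):
    """SigLIP-style loss: every image-text pair is an independent binary
    classification (matched on the diagonal, unmatched elsewhere)."""
    img_emb = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt_emb = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = temperature * img_emb @ txt_emb.T + bias
    n = logits.shape[0]
    labels = 2.0 * np.eye(n) - 1.0  # +1 for matched pairs, -1 otherwise
    # -log sigmoid(labels * logits), averaged over all n*n pairs
    return np.mean(np.log1p(np.exp(-labels * logits)))

def distillation_loss(student_feats, teacher_feats):
    """Cosine-distance feature distillation against a frozen teacher
    (the teacher receives no gradients)."""
    s = student_feats / np.linalg.norm(student_feats, axis=1, keepdims=True)
    t = teacher_feats / np.linalg.norm(teacher_feats, axis=1, keepdims=True)
    return 1.0 - np.mean(np.sum(s * t, axis=1))

def hybrid_loss(img_emb, txt_emb, student_feats, teacher_feats,
                distill_weight=1.0):
    """Sum of contrastive and distillation terms; the weight is assumed."""
    return (sigmoid_contrastive_loss(img_emb, txt_emb)
            + distill_weight * distillation_loss(student_feats, teacher_feats))
```

The distillation term gives the student a dense target from the first step, which is consistent with the claim that the hybrid framework accelerates early-stage convergence.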