RADIOv2.5: Improved Baselines for Agglomerative Vision Foundation Models

📅 2024-12-10
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses four key challenges in training agglomerative vision foundation models: (1) difficulty in multi-resolution adaptation, (2) capability imbalance across multiple teacher models, (3) distillation artifacts, and (4) output token redundancy. Methodologically, the authors propose a multi-resolution collaborative training paradigm and a token-aware compression mechanism, integrated with mosaic augmentation, dynamically weighted teacher loss balancing, and multi-source knowledge distillation from CLIP, DINO, and SAM. Technically, the approach overcomes modality inconsistency and information-density bottlenecks, enabling high-fidelity, efficient cross-scale knowledge transfer. Four open-source model variants (B, L, H, and g) are released, with significant improvements over baselines on zero-shot transfer and fine-grained visual understanding tasks. All pre-trained weights and inference code are publicly available.
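The dynamically weighted teacher loss balancing mentioned above can be sketched as follows. This is an illustrative implementation, not the paper's actual API: the idea is to normalize each teacher's distillation loss by a running estimate of its own magnitude, so that a teacher whose features produce numerically larger errors does not dominate the gradient. The function name, the MSE objective, and the momentum scheme are all assumptions for the sketch.

```python
import numpy as np

def balanced_distillation_loss(student_feats, teacher_feats, running_means,
                               momentum=0.9, eps=1e-8):
    """Combine per-teacher MSE losses, rescaling each by a running mean of
    its own magnitude so no single teacher dominates. Illustrative only;
    the paper's exact balancing scheme may differ."""
    total = 0.0
    for name, t_feat in teacher_feats.items():
        # raw distillation loss against this teacher's features
        loss = float(np.mean((student_feats[name] - t_feat) ** 2))
        # exponential moving average of this teacher's typical loss scale
        running_means[name] = (momentum * running_means.get(name, loss)
                               + (1 - momentum) * loss)
        # scale-normalized contribution
        total += loss / (running_means[name] + eps)
    return total / len(teacher_feats)
```

After normalization, each teacher's term hovers around 1 regardless of its raw scale, which is the intuition behind balancing teachers whose feature spaces differ widely (e.g. CLIP vs. SAM).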

๐Ÿ“ Abstract
Agglomerative models have recently emerged as a powerful approach to training vision foundation models, leveraging multi-teacher distillation from existing models such as CLIP, DINO, and SAM. This strategy enables the efficient creation of robust models, combining the strengths of individual teachers while significantly reducing computational and resource demands. In this paper, we thoroughly analyze state-of-the-art agglomerative models, identifying critical challenges including resolution mode shifts, teacher imbalance, idiosyncratic teacher artifacts, and an excessive number of output tokens. To address these issues, we propose several novel solutions: multi-resolution training, mosaic augmentation, and improved balancing of teacher loss functions. Specifically, in the context of Vision Language Models, we introduce a token compression technique to maintain high-resolution information within a fixed token count. We release our top-performing variants at multiple scales (-B, -L, -H, and -g), along with inference code and pretrained weights.
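Mosaic augmentation, one of the solutions named in the abstract, tiles several images into a single training sample so the model sees content at multiple effective resolutions at once. A minimal numpy sketch of the basic 2×2 tiling (the paper's actual recipe, crop sizes, and sampling strategy are not specified here and the function name is illustrative):

```python
import numpy as np

def mosaic_2x2(imgs):
    """Tile four equally sized H x W x C images into one 2H x 2W x C mosaic.
    Each source image ends up at half its original relative scale, exposing
    the model to smaller effective object sizes. Illustrative sketch only."""
    assert len(imgs) == 4, "expects exactly four images"
    top = np.concatenate([imgs[0], imgs[1]], axis=1)     # left | right
    bottom = np.concatenate([imgs[2], imgs[3]], axis=1)  # left | right
    return np.concatenate([top, bottom], axis=0)         # top / bottom
```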
Problem

Research questions and friction points this paper is trying to address.

Address resolution mode shifts in agglomerative models
Mitigate teacher imbalance and idiosyncratic artifacts
Reduce excessive output tokens in Vision Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-resolution training
Mosaic augmentation
Token compression technique
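The token compression idea can be illustrated with a simple pooling sketch: a high-resolution input yields many output tokens, and merging contiguous groups keeps the token count fixed for a downstream Vision Language Model. This mean-pooling stand-in is an assumption for illustration; the paper's actual token-aware mechanism is more sophisticated.

```python
import numpy as np

def compress_tokens(tokens, target_count):
    """Reduce an (N, D) token array to (target_count, D) by mean-pooling
    contiguous groups of N // target_count tokens. A crude stand-in for
    token merging; N must be divisible by target_count."""
    n, d = tokens.shape
    assert n % target_count == 0, "token count must divide evenly"
    group = n // target_count
    return tokens.reshape(target_count, group, d).mean(axis=1)
```

Usage: compressing a 1024-token high-resolution grid to 256 tokens keeps the VLM's context budget constant while each output token summarizes a 4-token neighborhood.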