🤖 AI Summary
To balance model compactness and detection accuracy in real-time open-vocabulary object detection, this paper proposes Dynamic-DINO, a dynamic Mixture-of-Experts (MoE) inference framework tailored for lightweight vision-language models. Rather than assigning experts statically, Dynamic-DINO observes that deep-layer experts form fixed collaborative patterns and introduces an input-aware mechanism that activates only a sparse subnetwork for each input. It further combines MoE-Tuning, fine-grained FFN splitting, and router initialization guided by pretrained weights to expand the learnable parameter space while keeping the activated subnet compact at inference time. Trained on only 1.56M publicly available images, Dynamic-DINO surpasses Grounding DINO 1.5 Edge, which was trained on the private Grounding20M dataset, on open-vocabulary detection, delivering a superior accuracy-latency trade-off with real-time performance.
📝 Abstract
The Mixture of Experts (MoE) architecture has excelled in Large Vision-Language Models (LVLMs), yet its potential in real-time open-vocabulary object detectors, which also leverage large-scale vision-language datasets but smaller models, remains unexplored. This work investigates this domain, revealing intriguing insights. In the shallow layers, experts tend to cooperate with diverse peers to expand the search space, while in the deeper layers, fixed collaborative structures emerge: each expert maintains 2-3 fixed partners, and distinct expert combinations specialize in processing specific patterns. Concretely, we propose Dynamic-DINO, which extends Grounding DINO 1.5 Edge from a dense model to a dynamic inference framework via an efficient MoE-Tuning strategy. Additionally, we design a granularity decomposition mechanism that decomposes the Feed-Forward Network (FFN) of the base model into multiple smaller expert networks, expanding the subnet search space. To prevent performance degradation at the start of fine-tuning, we further propose a pre-trained weight allocation strategy for the experts, coupled with a specific router initialization. During inference, only the input-relevant experts are activated to form a compact subnet. Experiments show that, pretrained with merely 1.56M open-source images, Dynamic-DINO outperforms Grounding DINO 1.5 Edge, pretrained on the private Grounding20M dataset.
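The core mechanism the abstract describes, splitting a dense FFN into finer-grained experts, allocating pretrained weights to them, and routing each token to a few input-relevant experts, can be illustrated with a minimal PyTorch sketch. All names, sizes, and the specific routing scheme (softmax then top-k) below are illustrative assumptions, not the paper's actual implementation:

```python
import torch
import torch.nn as nn


class MoEFFN(nn.Module):
    """Hypothetical sketch: one dense FFN split into `num_experts` smaller
    expert FFNs, with a router sending each token to its top-k experts."""

    def __init__(self, d_model=256, d_ffn=2048, num_experts=4, top_k=2):
        super().__init__()
        assert d_ffn % num_experts == 0
        d_expert = d_ffn // num_experts  # fine-grained split of the hidden width
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_expert), nn.ReLU(),
                          nn.Linear(d_expert, d_model))
            for _ in range(num_experts))
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.top_k = top_k

    @torch.no_grad()
    def init_from_dense(self, w1, b1, w2, b2):
        """Allocate pretrained dense-FFN weights (w1: d_ffn x d_model,
        w2: d_model x d_ffn) to the experts by slicing the hidden dimension,
        so fine-tuning starts close to the base model's behavior."""
        d = w1.shape[0] // len(self.experts)
        for i, e in enumerate(self.experts):
            e[0].weight.copy_(w1[i * d:(i + 1) * d])
            e[0].bias.copy_(b1[i * d:(i + 1) * d])
            e[2].weight.copy_(w2[:, i * d:(i + 1) * d])
            # Share the output bias so summing all experts recovers the dense FFN.
            e[2].bias.copy_(b2 / len(self.experts))

    def forward(self, x):  # x: (num_tokens, d_model)
        weights, idx = self.router(x).softmax(-1).topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        # Only the selected (input-relevant) experts run for each token.
        for k in range(self.top_k):
            for e_id, expert in enumerate(self.experts):
                mask = idx[:, k] == e_id
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out
```

At initialization, activating all experts with unit weight would reproduce the dense FFN exactly; with top-k routing the match is only approximate, which is why the paper pairs the weight allocation with a specific router initialization to avoid early performance degradation.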