DiveUp: Learning Feature Upsampling from Diverse Vision Foundation Models

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing feature upsampling methods rely on a single vision foundation model, making them susceptible to positional shifts and high-norm artifacts that distort spatial structure. This work proposes DiveUp, an upsampling framework built on a collaborative paradigm that leverages multiple vision foundation models. DiveUp constructs a cross-model universal relational representation via a local center-of-mass (COM) field and employs a spikiness-aware selection strategy to dynamically identify reliable expert models. The upsampler is jointly trained in an encoder-agnostic manner, eliminating dependence on any single model and enabling a unified, model-agnostic architecture. Extensive experiments demonstrate state-of-the-art performance across diverse dense prediction tasks, validating the efficacy of the multi-expert relational guidance mechanism without requiring per-model retraining.

📝 Abstract
Recently, feature upsampling has gained increasing attention owing to its effectiveness in enhancing vision foundation models (VFMs) for pixel-level understanding tasks. Existing methods typically rely on high-resolution features from the same foundation model to achieve upsampling via self-reconstruction. However, relying solely on intra-model features forces the upsampler to overfit to the source model's inherent location misalignment and high-norm artifacts. To address this fundamental limitation, we propose DiveUp, a novel framework that breaks away from single-model dependency by introducing multi-VFM relational guidance. Instead of naive feature fusion, DiveUp leverages diverse VFMs as a panel of experts, utilizing their structural consensus to regularize the upsampler's learning process, effectively preventing the propagation of inaccurate spatial structures from the source model. To reconcile the unaligned feature spaces across different VFMs, we propose a universal relational feature representation, formulated as a local center-of-mass (COM) field, that extracts intrinsic geometric structures, enabling seamless cross-model interaction. Furthermore, we introduce a spikiness-aware selection strategy that evaluates the spatial reliability of each VFM, effectively filtering out high-norm artifacts to aggregate guidance from only the most reliable expert at each local region. DiveUp is a unified, encoder-agnostic framework; a jointly-trained model can universally upsample features from diverse VFMs without requiring per-model retraining. Extensive experiments demonstrate that DiveUp achieves state-of-the-art performance across various downstream dense prediction tasks, validating the efficacy of multi-expert relational guidance. Our code and models are available at: https://github.com/Xiaoqiong-Liu/DiveUp
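The abstract describes two mechanisms: a local center-of-mass (COM) field that encodes each position's structure as the affinity-weighted mean offset of its neighbors, and a spikiness score that flags high-norm artifact positions so guidance is taken from the most reliable expert per region. The paper does not give implementation details, so the sketch below is only one plausible reading: `local_com_field`, `spikiness`, `select_expert`, and all window/temperature parameters are hypothetical names and choices, not the authors' code.

```python
import numpy as np

def local_com_field(feat, radius=2, tau=0.1):
    """Hypothetical COM field: for each position, softmax-normalize the
    dot-product similarity to features in a (2r+1)^2 window, then take
    the similarity-weighted mean of the neighbors' spatial offsets.
    feat: (C, H, W) feature map -> returns a (2, H, W) offset field."""
    C, H, W = feat.shape
    r = radius
    pad = np.pad(feat, ((0, 0), (r, r), (r, r)), mode="edge")
    offsets = [(dy, dx) for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
    # similarity of each position to each shifted copy of the feature map
    sims = np.stack([
        (feat * pad[:, r + dy:r + dy + H, r + dx:r + dx + W]).sum(0)
        for dy, dx in offsets
    ])                                    # (K, H, W)
    w = np.exp((sims - sims.max(0)) / tau)
    w /= w.sum(0)                         # softmax over the local window
    off = np.array(offsets, dtype=float)  # (K, 2) offset vectors
    return np.einsum("khw,kc->chw", w, off)  # (2, H, W) expected offset

def spikiness(feat):
    """Assumed heuristic: z-score of the per-position channel norm;
    large positive values flag high-norm artifact tokens."""
    n = np.linalg.norm(feat, axis=0)
    return (n - n.mean()) / (n.std() + 1e-8)

def select_expert(feats):
    """Pick, at each position, the expert VFM with the lowest spikiness."""
    s = np.stack([spikiness(f) for f in feats])  # (E, H, W)
    return s.argmin(0)                           # (H, W) expert indices
```

Because the COM field depends only on relative similarities within a window, it is invariant to each VFM's feature scale and basis, which is what makes a shared relational representation across unaligned feature spaces plausible.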
Problem

Research questions and friction points this paper is trying to address.

feature upsampling
vision foundation models
location misalignment
high-norm artifacts
pixel-level understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-VFM relational guidance
feature upsampling
center-of-mass field
spikiness-aware selection
encoder-agnostic framework
Xiaoqiong Liu
Department of Computer Science and Engineering, University of North Texas
Heng Fan
Assistant Professor, University of North Texas
Computer Vision · Machine Learning · Artificial Intelligence