JAFAR: Jack up Any Feature at Any Resolution

📅 2025-06-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Low-resolution features from foundational vision encoders are ill-suited for dense vision tasks requiring high spatial fidelity. To address this, we propose a lightweight, flexible, and supervision-free arbitrary-scale upsampling method that operates without high-resolution ground-truth annotations. Our approach jointly leverages semantic-aligned attention and Spatial Feature Transform (SFT)-based modulation, guided by low-level image features to synthesize high-resolution queries—enabling precise feature reconstruction from any low-resolution input to any target resolution. Crucially, it introduces the first cross-scale generalizable upsampling mechanism, balancing semantic consistency and fine-grained detail recovery. Experiments demonstrate substantial improvements over state-of-the-art methods across multiple dense prediction tasks—including semantic segmentation and depth estimation. Notably, models trained solely at small scales generalize effectively to upscaling factors as high as 32×, accurately restoring intricate spatial structures.

📝 Abstract
Foundation Vision Encoders have become essential for a wide range of dense vision tasks. However, their low-resolution spatial feature outputs necessitate feature upsampling to produce the high-resolution modalities required for downstream tasks. In this work, we introduce JAFAR, a lightweight and flexible feature upsampler that enhances the spatial resolution of visual features from any Foundation Vision Encoder to an arbitrary target resolution. JAFAR employs an attention-based module designed to promote semantic alignment between high-resolution queries, derived from low-level image features, and semantically enriched low-resolution keys, using Spatial Feature Transform (SFT) modulation. Notably, despite the absence of high-resolution supervision, we demonstrate that learning at low upsampling ratios and resolutions generalizes remarkably well to significantly higher output scales. Extensive experiments show that JAFAR effectively recovers fine-grained spatial details and consistently outperforms existing feature upsampling methods across a diverse set of downstream tasks. Project page at https://jafar-upsampler.github.io
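The upsampling mechanism the abstract describes can be sketched roughly as follows. This is a minimal illustrative toy, not the authors' implementation: the module names, dimensions, and the exact query/key construction are assumptions. The idea it demonstrates is the one stated above: high-resolution queries are derived from low-level image features, keys come from the low-resolution encoder features modulated by Spatial Feature Transform (SFT) scale/shift maps predicted from the image, and cross-attention transfers the low-resolution features to an arbitrary target resolution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SFT(nn.Module):
    """Spatial Feature Transform: modulate features with per-pixel
    scale/shift maps predicted from a guidance signal (FiLM-style)."""
    def __init__(self, feat_dim, guide_dim):
        super().__init__()
        self.to_scale = nn.Conv2d(guide_dim, feat_dim, 1)
        self.to_shift = nn.Conv2d(guide_dim, feat_dim, 1)

    def forward(self, feat, guide):
        # Resize guidance to the feature map's spatial size, then modulate.
        guide = F.interpolate(guide, size=feat.shape[-2:],
                              mode="bilinear", align_corners=False)
        return feat * (1 + self.to_scale(guide)) + self.to_shift(guide)


class CrossScaleUpsampler(nn.Module):
    """Toy attention-based feature upsampler: high-res queries built from
    low-level image features attend to SFT-modulated low-res keys; the
    values are the low-res encoder features themselves."""
    def __init__(self, feat_dim=64, img_dim=16, qk_dim=32):
        super().__init__()
        self.img_embed = nn.Conv2d(3, img_dim, 3, padding=1)  # low-level image features
        self.sft = SFT(feat_dim, img_dim)
        self.to_q = nn.Conv2d(img_dim, qk_dim, 1)
        self.to_k = nn.Conv2d(feat_dim, qk_dim, 1)

    def forward(self, lr_feat, image, out_size):
        B, C, h, w = lr_feat.shape
        g = self.img_embed(image)
        # Queries at the arbitrary target resolution, from image features.
        q = self.to_q(F.interpolate(g, size=out_size,
                                    mode="bilinear", align_corners=False))
        # Keys from low-res features, semantically aligned via SFT guidance.
        k = self.to_k(self.sft(lr_feat, g))
        v = lr_feat
        H, W = out_size
        q = q.flatten(2).transpose(1, 2)   # (B, H*W, qk_dim)
        k = k.flatten(2).transpose(1, 2)   # (B, h*w, qk_dim)
        v = v.flatten(2).transpose(1, 2)   # (B, h*w, C)
        attn = torch.softmax(q @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)
        return (attn @ v).transpose(1, 2).reshape(B, C, H, W)
```

Because the queries are built at whatever `out_size` is requested, the same weights can in principle be applied at target resolutions never seen in training, which is the cross-scale generalization property the paper reports.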
Problem

Research questions and friction points this paper is trying to address.

Enhance spatial resolution of Foundation Vision Encoders
Achieve semantic alignment in feature upsampling
Generalize low-resolution learning to high-resolution outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight attention-based feature upsampler
Semantic alignment via SFT modulation
Generalizes from low to high resolution
Paul Couairon
Thales, TSGF, cortAIx Labs, France
Loïck Chambon
PhD Student - Sorbonne University & Valeo.ai
Computer Vision
Louis Serrano
Sorbonne Université - ISIR
Deep Learning for Physics
Jean-Emmanuel Haugeard
Thales, TSGF, cortAIx Labs, France
M. Cord
Sorbonne Université, CNRS, ISIR, F-75005 Paris, France; Valeo.ai
Nicolas Thome
Professor of Computer Science, Sorbonne University, France
Machine Learning · Deep Learning · Computer Vision