Test-Time Canonicalization by Foundation Models for Robust Perception

📅 2025-07-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Real-world visual perception demands invariance to geometric and photometric transformations—such as rotation, illumination variation, and color shifts—but existing approaches rely either on architecture-specific designs or predefined data augmentations, limiting generalizability. To address this, we propose FOCAL, the first test-time framework that leverages internet-scale priors from foundation models (e.g., CLIP, SAM) to generate and optimize candidate transformations, mapping inputs to canonical “normalized” views without fine-tuning or architectural modification. FOCAL thus enables data-driven normalization grounded in semantic and geometric consistency, eliminating dependence on transformation-specific training data. It offers a scalable pathway to robustness and facilitates novel applications such as active vision. Experiments demonstrate substantial improvements in perception robustness for CLIP and SAM under 2D/3D rotations, contrast variations, chromatic biases, and day–night domain shifts.
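
A minimal, hypothetical sketch of what this candidate-search idea could look like in code. It assumes OpenAI's `clip` package, a small fixed set of 2D rotation candidates, and CLIP image-text alignment with a generic prompt as the "typicality" score; FOCAL's actual candidate generation and scoring may differ (see the linked repository for the real implementation).

```python
# Hypothetical sketch of test-time canonicalization by candidate search.
# Assumptions (not from the paper): OpenAI's `clip` package, rotation-only
# candidates, and CLIP image-text alignment with a generic prompt as the
# "typicality" score.
import torch
import clip
from PIL import Image
import torchvision.transforms.functional as TF

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Generic prompt: upright, "canonical" views should align with it best.
text = clip.tokenize(["a photo"]).to(device)

def canonicalize(image: Image.Image, angles=(0, 90, 180, 270)) -> Image.Image:
    """Return the candidate rotation that CLIP scores as most typical."""
    candidates = [TF.rotate(image, a, expand=True) for a in angles]
    batch = torch.stack([preprocess(c) for c in candidates]).to(device)
    with torch.no_grad():
        img_feat = model.encode_image(batch)
        txt_feat = model.encode_text(text)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
        scores = (img_feat @ txt_feat.T).squeeze(-1)  # alignment per candidate
    return candidates[scores.argmax().item()]

# Downstream models (CLIP, SAM, ...) then consume canonicalize(img)
# instead of the raw input.
```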

📝 Abstract
Real-world visual perception requires invariance to diverse transformations, yet current methods rely heavily on specialized architectures or training on predefined augmentations, limiting generalization. We propose FOCAL, a test-time, data-driven framework that achieves robust perception by leveraging internet-scale visual priors from foundation models. By generating and optimizing candidate transformations toward visually typical, "canonical" views, FOCAL enhances robustness without re-training or architectural changes. Our experiments demonstrate improved robustness of CLIP and SAM across challenging transformations, including 2D/3D rotations, illumination shifts (contrast and color), and day-night variations. We also highlight potential applications in active vision. Our approach challenges the assumption that transform-specific training is necessary, instead offering a scalable path to invariance. Our code is available at: https://github.com/sutkarsh/focal.
Problem

Research questions and friction points this paper is trying to address.

Robust perception currently depends on specialized, transformation-specific architectures
Training on predefined augmentations limits generalization to unseen transformations
Is transform-specific training actually necessary, or can invariance be achieved at test time?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages internet-scale visual priors from foundation models (CLIP, SAM)
Generates and optimizes candidate transformations toward canonical views
Achieves robustness at test time, without re-training or architectural changes (see the sketch below)
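
The abstract's phrase "generating and optimizing candidate transformations" suggests the canonicalizing transform can also be refined continuously rather than only selected from a discrete set. Below is a hedged sketch of one such refinement for a 2D rotation angle, using a differentiable rotation (`affine_grid`/`grid_sample`) and gradient ascent on a typicality score; the `score_fn`, optimizer, and step count are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch: continuous refinement of a rotation angle at test time.
# `score_fn` is assumed to be a differentiable "typicality" score (e.g., the
# normalized CLIP image-text similarity from the earlier sketch); the paper's
# actual objective and optimizer settings may differ.
import torch
import torch.nn.functional as F

def refine_rotation(img: torch.Tensor, score_fn, steps: int = 50, lr: float = 0.05):
    """img: (1, C, H, W) float tensor; score_fn maps such a tensor to a scalar.
    Returns the canonicalized (rotated) image."""
    theta = torch.zeros(1, requires_grad=True)  # rotation angle in radians
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        cos, sin = torch.cos(theta), torch.sin(theta)
        # 2x3 affine matrix for a pure rotation about the image center.
        mat = torch.stack([
            torch.cat([cos, -sin, torch.zeros(1)]),
            torch.cat([sin,  cos, torch.zeros(1)]),
        ]).unsqueeze(0)  # shape (1, 2, 3)
        grid = F.affine_grid(mat, img.shape, align_corners=False)
        rotated = F.grid_sample(img, grid, align_corners=False)
        loss = -score_fn(rotated)  # gradient ascent on typicality
        opt.zero_grad()
        loss.backward()
        opt.step()
    return rotated.detach()
```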