Do computer vision foundation models learn the low-level characteristics of the human visual system?

📅 2025-02-27
🤖 AI Summary
It remains unclear whether vision foundation models implicitly acquire the low-level properties of human vision, such as contrast detection, contrast masking, and contrast constancy, that underpin biological perception. Method: nine standardized psychophysical test protocols were designed and used to evaluate 45 state-of-the-art vision foundation and generative models along several quantitative dimensions, including contrast sensitivity functions, frequency-domain responses, and feature-encoding consistency; this constitutes the first systematic human-model alignment assessment at this scale. Results: most models exhibit reduced sensitivity to low-contrast stimuli and irregular frequency tuning. DINOv2 achieves the closest behavioral alignment with humans on contrast masking tasks, suggesting that self-supervised pretraining can implicitly capture certain biological visual mechanisms; DINO and OpenCLIP show partial alignment on specific subtasks. This work establishes a reproducible benchmark for perceptual evaluation of vision models and offers insight into how these models encode low-level image structure.
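The contrast-detection protocol can be sketched as follows. This is a minimal illustration under assumptions, not the paper's code: `toy_encoder` is a hypothetical stand-in (a fixed Fourier-magnitude "embedding") for a real frozen encoder such as DINOv2, and the detection criterion is an arbitrary unit.

```python
import numpy as np

def grating(size, freq_cpd, contrast, ppd=32.0):
    """Horizontal sinusoidal grating with the given Michelson contrast.

    freq_cpd: spatial frequency in cycles per degree of visual angle;
    ppd: assumed pixels per degree. Mean luminance is 0.5.
    """
    y = np.arange(size) / ppd  # pixel positions in visual degrees
    wave = np.sin(2.0 * np.pi * freq_cpd * y)
    return 0.5 + 0.5 * contrast * wave[:, None] * np.ones((1, size))

def toy_encoder(img):
    """Hypothetical stand-in for a frozen foundation-model image encoder.

    The paper probes encoders such as DINOv2; here a fixed
    Fourier-magnitude 'embedding' keeps the sketch self-contained.
    """
    return np.abs(np.fft.rfft2(img - img.mean())).ravel()

def detection_threshold(freq_cpd, criterion=50.0, size=64):
    """Lowest contrast whose embedding moves farther than `criterion`
    (an arbitrary unit) from a blank field; sensitivity = 1 / threshold.
    """
    blank = toy_encoder(grating(size, freq_cpd, 0.0))
    for c in np.logspace(-3, 0, 40):  # sweep contrast from 0.001 to 1
        d = np.linalg.norm(toy_encoder(grating(size, freq_cpd, c)) - blank)
        if d > criterion:
            return c
    return 1.0

# Sensitivity (1/threshold) across spatial frequencies traces out a
# model "contrast sensitivity function" analogous to the human CSF.
for f in (1.0, 4.0, 8.0):
    print(f, 1.0 / detection_threshold(f))
```

Plotting 1/threshold against spatial frequency for a real encoder, and comparing its shape to the band-pass human CSF, is the kind of comparison the benchmark makes quantitative.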

📝 Abstract
Computer vision foundation models, such as DINO or OpenCLIP, are trained in a self-supervised manner on large image datasets. Analogously, substantial evidence suggests that the human visual system (HVS) is influenced by the statistical distribution of colors and patterns in the natural world, characteristics also present in the training data of foundation models. The question we address in this paper is whether foundation models trained on natural images mimic some of the low-level characteristics of the human visual system, such as contrast detection, contrast masking, and contrast constancy. Specifically, we designed a protocol comprising nine test types to evaluate the image encoders of 45 foundation and generative models. Our results indicate that some foundation models (e.g., DINO, DINOv2, and OpenCLIP) share some of the characteristics of human vision, but other models show little resemblance. Foundation models tend to show lower sensitivity to low contrast and rather irregular responses to contrast across frequencies. The foundation models show the best agreement with human data in terms of contrast masking. Our findings suggest that human vision and computer vision may take both similar and different paths when learning to interpret images of the real world. Overall, while differences remain, foundation models trained on vision tasks start to align with low-level human vision, with DINOv2 showing the closest resemblance.
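The contrast-masking test can be sketched in the same spirit: measure how detectable a target grating is when superimposed on a masker, by comparing encoder embeddings of masker-alone versus masker-plus-target. Everything below is an illustrative assumption, not the paper's implementation; `embed` is a linear stand-in, so its thresholds stay flat across masker contrasts, whereas the masking effect (thresholds rising with masker contrast) is exactly the nonlinear human-like behavior the paper tests real encoders for.

```python
import numpy as np

def sinusoid(size, cycles, contrast, orientation="h"):
    """Zero-mean sinusoidal grating with `cycles` periods across the image."""
    wave = 0.5 * contrast * np.sin(2.0 * np.pi * cycles * np.arange(size) / size)
    img = wave[:, None] if orientation == "h" else wave[None, :]
    return img * np.ones((size, size))

def embed(img):
    """Hypothetical stand-in for a frozen encoder (e.g. DINOv2 features)."""
    return np.abs(np.fft.rfft2(img)).ravel()

def masked_threshold(mask_contrast, size=64, criterion=30.0):
    """Target contrast at which masker+target becomes discriminable from
    the masker alone in embedding space (criterion is arbitrary)."""
    mask = sinusoid(size, 8, mask_contrast, "v")  # orthogonal masker
    base = embed(0.5 + mask)                      # masker on mean gray
    for c in np.logspace(-3, 0, 40):              # sweep target contrast
        probe = embed(0.5 + mask + sinusoid(size, 8, c, "h"))
        if np.linalg.norm(probe - base) > criterion:
            return c
    return 1.0

# A human-aligned encoder would show thresholds increasing with masker
# contrast; a flat curve here only reflects the linear stand-in.
for m in (0.0, 0.2, 0.5):
    print(m, masked_threshold(m))
```

Sweeping masker contrast and orientation yields a masking curve per model, which can then be compared against classical human masking data, the subtask on which the paper reports DINOv2's closest agreement.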
Problem

Research questions and friction points this paper is trying to address.

Assessing whether computer vision foundation models mimic the human visual system
Evaluating low-level vision characteristics (contrast detection, masking, constancy) in foundation models
Comparing model responses with human contrast sensitivity data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Nine-type psychophysical test protocol adapted to image encoders
Large-scale evaluation of 45 foundation and generative models
Evidence that self-supervised models (notably DINOv2) align with low-level human vision