🤖 AI Summary
Deep neural networks excel at visual tasks yet show a notable gap in aligning with human visual behavior, as measured by error consistency and shape bias. This work applies low-pass filtering, e.g., Gaussian blur, to input images at inference time, substantially improving model–human alignment without retraining. The analysis reveals that the alignment advantage of generative models stems from their implicit low-pass filtering, and demonstrates for the first time that a simple blurring operation can halve the alignment gap. Through contrast sensitivity analysis, spectral matching, and Pareto-front evaluation, the approach sets a new state of the art on the model-vs-human benchmark, with the optimal filter spectrum closely matching the band-pass characteristics of the human visual system.
📝 Abstract
Despite their impressive performance on computer vision benchmarks, Deep Neural Networks (DNNs) still fall short of adequately modeling human visual behavior, as measured by error consistency and shape bias. Recent work hypothesized that behavioral alignment can be drastically improved through \emph{generative} -- rather than \emph{discriminative} -- classifiers, with far-reaching implications for models of human vision. Here, we instead show that the increased alignment of generative models can largely be explained by a seemingly innocuous resizing operation in the generative model, which effectively acts as a low-pass filter. In a series of controlled experiments, we show that removing high-frequency spatial information from discriminative models like CLIP drastically increases their behavioral alignment. Simply blurring images at test time -- rather than training on blurred images -- achieves a new state-of-the-art score on the model-vs-human benchmark, halving the current alignment gap between DNNs and human observers. Furthermore, low-pass filters are likely optimal, which we demonstrate by directly optimizing filters for alignment. To contextualize the performance of these optimal filters, we compute the formerly unknown frontier of all Pareto-optimal solutions to the benchmark. We explain our findings by observing that the frequency spectrum of optimal Gaussian filters roughly matches that of the band-pass filters implemented by the human visual system. Specifically, the contrast sensitivity function, which describes the inverse of the contrast threshold at which humans detect a sinusoidal grating as a function of spatiotemporal frequency, is well approximated by Gaussian filters of the very width that also maximizes error consistency.
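As a concrete illustration of the test-time blurring described above, the sketch below applies a Gaussian low-pass filter to an image in the frequency domain, as one would before feeding the image to a frozen classifier. This is a minimal sketch, not the paper's exact preprocessing pipeline: the function name and the `sigma` parameterization (spatial standard deviation in pixels) are illustrative assumptions.

```python
import numpy as np

def gaussian_low_pass(image: np.ndarray, sigma: float) -> np.ndarray:
    """Blur a 2-D image by multiplying its spectrum with a Gaussian.

    `sigma` is the standard deviation of the spatial Gaussian in pixels
    (an illustrative parameterization, not necessarily the paper's).
    """
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]  # frequencies (cycles/pixel) along rows
    fx = np.fft.fftfreq(w)[None, :]  # frequencies (cycles/pixel) along cols
    # Fourier transform of a spatial Gaussian with std sigma:
    # exp(-2 * pi^2 * sigma^2 * f^2); equals 1 at f = 0, decays at high f.
    kernel = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * kernel))

# Toy usage: blurring a noise image attenuates its high-frequency content
# while leaving the mean (zero-frequency component) untouched.
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
blurred = gaussian_low_pass(img, sigma=2.0)
```

Applying the filter only at inference, as here, is what distinguishes this approach from training on blurred images: the model weights stay fixed and only the input spectrum changes.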