AI Summary
This study investigates whether self-supervised vision models spontaneously develop human-like Gestalt perception (specifically illusory contour completion, convexity preference, and dynamic figure-ground segregation) and examines whether modeling global spatial structure is necessary for it. We introduce DiSRT (Distorted Spatial Relationship Testbench), the first diagnostic benchmark to systematically evaluate model sensitivity to core Gestalt principles, including closure, proximity, and figure-ground assignment. Our experiments show that self-supervised pretraining (e.g., MAE) induces robust Gestalt perception, whereas subsequent supervised fine-tuning degrades it; introducing a Top-K sparse activation mechanism effectively restores global spatial sensitivity. Notably, self-supervised ViT and ConvNeXt models evaluated on DiSRT outperform supervised baselines, with some metrics exceeding human performance. These results indicate that Gestalt organization does not require attention mechanisms per se but is instead modulated by the training paradigm, highlighting the critical role of objective design in shaping perceptual priors.
Abstract
Human vision organizes local cues into coherent global forms using Gestalt principles like closure, proximity, and figure-ground assignment -- functions reliant on global spatial structure. We investigate whether modern vision models show similar behaviors, and under what training conditions these emerge. We find that Vision Transformers (ViTs) trained with Masked Autoencoding (MAE) exhibit activation patterns consistent with Gestalt laws, including illusory contour completion, convexity preference, and dynamic figure-ground segregation. To probe the computational basis, we hypothesize that modeling global dependencies is necessary for Gestalt-like organization. We introduce the Distorted Spatial Relationship Testbench (DiSRT), which evaluates sensitivity to global spatial perturbations while preserving local textures. Using DiSRT, we show that self-supervised models (e.g., MAE, CLIP) outperform supervised baselines and sometimes even exceed human performance. ConvNeXt models trained with MAE also exhibit Gestalt-compatible representations, suggesting such sensitivity can arise without attention architectures. However, classification fine-tuning degrades this ability. Inspired by biological vision, we show that a Top-K activation sparsity mechanism can restore global sensitivity. Our findings identify training conditions that promote or suppress Gestalt-like perception and establish DiSRT as a diagnostic for global structure sensitivity across models.
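The abstract describes DiSRT as perturbing global spatial relationships while preserving local textures. A minimal sketch of one such perturbation, assuming a simple non-overlapping patch shuffle (the benchmark's exact transformations may differ):

```python
import numpy as np

def disrt_style_perturb(image, patch=16, seed=0):
    """Shuffle non-overlapping patches of an H x W x C image.

    Local texture inside each patch is untouched, but the global
    spatial arrangement of patches is destroyed -- the kind of
    distortion DiSRT uses to probe global structure sensitivity.
    (Hypothetical re-implementation, not the benchmark's own code.)
    """
    h, w, c = image.shape
    gh, gw = h // patch, w // patch
    # Cut the image into a (gh*gw, patch, patch, c) stack of patches.
    patches = (image[:gh * patch, :gw * patch]
               .reshape(gh, patch, gw, patch, c)
               .transpose(0, 2, 1, 3, 4)
               .reshape(gh * gw, patch, patch, c))
    # Randomly permute the patch order.
    rng = np.random.default_rng(seed)
    patches = patches[rng.permutation(gh * gw)]
    # Reassemble the shuffled patches into an image.
    return (patches.reshape(gh, gw, patch, patch, c)
                   .transpose(0, 2, 1, 3, 4)
                   .reshape(gh * patch, gw * patch, c))

img = np.arange(64 * 64 * 3, dtype=np.float32).reshape(64, 64, 3)
shuffled = disrt_style_perturb(img, patch=16, seed=1)
```

Because only patch positions change, the pixel histogram (and hence most local texture statistics) is identical before and after the perturbation; a model that relies purely on local cues should score the same on both versions.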
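The Top-K activation sparsity mechanism mentioned above can be sketched as follows: keep only the k largest activations along the feature axis and zero the rest. Where in the network this is applied, and the choice of k, are assumptions here, not details from the abstract:

```python
import numpy as np

def topk_activation(x, k):
    """Keep the k largest entries along the last axis; zero the rest.

    A minimal sketch of a Top-K sparse activation applied to a generic
    activation tensor x (e.g., token features of shape [tokens, dim]).
    """
    # Indices of the k largest entries per row (unordered within the top-k).
    idx = np.argpartition(x, -k, axis=-1)[..., -k:]
    # Boolean mask marking the surviving activations.
    mask = np.zeros_like(x, dtype=bool)
    np.put_along_axis(mask, idx, True, axis=-1)
    return np.where(mask, x, 0.0)

x = np.array([[3.0, 1.0, 2.0, 5.0]])
out = topk_activation(x, k=2)  # keeps 5.0 and 3.0, zeros the rest
```

This kind of competitive sparsity forces the representation to commit to a few strong responses per feature vector, one plausible route by which global sensitivity could be restored after fine-tuning.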