🤖 AI Summary
Deep learning models generalize to out-of-distribution inputs far less robustly than humans, particularly in contour integration, a core capability for object recognition, yet this gap has not been systematically attributed. Method: We construct a controllable set of fragmented stimuli and combine human behavioral experiments (n=50), large-scale multi-model evaluation, and shape-bias analysis. We identify, for the first time, a human-specific "directional contour integration preference" as a key mechanism underlying the performance divergence, and confirm its causal role via targeted training interventions. Contribution/Results: We show that this preference strengthens monotonically with training data scale, with models approaching human-level contour integration only at roughly 5 billion training samples. Explicitly training for contour integration markedly increases shape bias and accuracy on fragmented object recognition, establishing a new benchmark and an interpretable pathway for visual representation learning.
📝 Abstract
Despite the tremendous success of deep learning in computer vision, models still fall behind humans in generalizing to new input distributions. Existing benchmarks do not investigate models' specific failure points by analyzing performance under many controlled conditions. Our study systematically dissects where and why models struggle with contour integration -- a hallmark of human vision -- by designing an experiment that tests object recognition under various levels of object fragmentation. Humans (n=50) perform at high accuracy, even with few object contours present. This is in contrast to models, which exhibit substantially lower sensitivity to increasing object contours, with most of the over 1,000 models we tested barely performing above chance. Only at very large scales ($\sim$5B training dataset size) do models begin to approach human performance. Importantly, humans exhibit an integration bias -- a preference towards recognizing objects made up of directional fragments over directionless fragments. We find not only that models sharing this property perform better at our task, but also that this bias increases with model training dataset size, and that training models to exhibit contour integration leads to high shape bias. Taken together, our results suggest that contour integration is a hallmark of object vision that underlies object recognition performance, and may be a mechanism learned from data at scale.
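The integration bias described above can be operationalized as a simple accuracy contrast between the two fragment conditions. The sketch below is a hypothetical illustration (the function name, metric definition, and toy accuracies are assumptions, not taken from the paper): a positive value indicates a preference for directional fragments.

```python
def integration_bias(acc_directional, acc_directionless):
    """Hypothetical integration-bias metric: mean recognition accuracy on
    directional-fragment stimuli minus mean accuracy on directionless-fragment
    stimuli. Positive values indicate a preference for directional fragments."""
    mean_dir = sum(acc_directional) / len(acc_directional)
    mean_nodir = sum(acc_directionless) / len(acc_directionless)
    return mean_dir - mean_nodir

# Toy per-stimulus accuracies for one observer or model (illustrative values only)
acc_dir = [0.90, 0.85, 0.80]    # directional fragments
acc_nodir = [0.60, 0.55, 0.65]  # directionless fragments

print(round(integration_bias(acc_dir, acc_nodir), 2))  # prints 0.25
```

Under this toy definition, a human observer would show a clearly positive bias, while a model lacking contour integration would score near zero.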