🤖 AI Summary
Deep neural networks fail to replicate the hierarchical structure of human conceptual knowledge, which spans fine-grained to coarse-grained abstractions, resulting in weaker generalization and out-of-distribution (OOD) robustness than human vision.
Method: We propose a cross-level human-alignment framework that, for the first time, systematically identifies and corrects representational misalignments between models and humans across multiple levels of semantic abstraction. Using knowledge distillation, we transfer multi-level semantic structure from a teacher model trained to emulate human judgments into pretrained vision foundation models. To support this, we introduce the first human similarity-judgment dataset annotated across abstraction levels.
Contribution: Our approach significantly improves model fidelity to human behavioral patterns and uncertainty responses in similarity tasks. It enhances generalization and OOD robustness across multiple benchmarks, establishing a novel paradigm for developing cognitively aligned, general-purpose visual representations.
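The distillation idea above can be sketched as matching pairwise similarity structure between student and teacher at each abstraction level. The paper's exact loss is not specified here, so the function below (`multilevel_distillation_loss`, with hypothetical `level_weights`) is an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def cosine_similarity_matrix(embeddings):
    """Pairwise cosine similarities for a batch of embedding vectors."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return normed @ normed.T

def multilevel_distillation_loss(student_emb, teacher_embs_per_level, level_weights):
    """Hypothetical sketch: weighted MSE between the student's pairwise
    similarity matrix and the teacher's similarity matrix at each
    abstraction level (fine- to coarse-grained)."""
    s_sim = cosine_similarity_matrix(student_emb)
    loss = 0.0
    for t_emb, w in zip(teacher_embs_per_level, level_weights):
        t_sim = cosine_similarity_matrix(t_emb)
        loss += w * np.mean((s_sim - t_sim) ** 2)
    return loss
```

Matching similarity matrices rather than raw embeddings lets the student inherit the teacher's relational structure without requiring the two models to share an embedding space.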
📝 Abstract
Deep neural networks have achieved success across a wide range of applications, including as models of human behavior in vision tasks. However, neural network training and human learning differ in fundamental ways, and neural networks often fail to generalize as robustly as humans do, raising questions regarding the similarity of their underlying representations. What is missing for modern learning systems to exhibit more human-like behavior? We highlight a key misalignment between vision models and humans: whereas human conceptual knowledge is hierarchically organized from fine- to coarse-scale distinctions, model representations do not accurately capture all these levels of abstraction. To address this misalignment, we first train a teacher model to imitate human judgments, then transfer human-like structure from its representations into pretrained state-of-the-art vision foundation models. These human-aligned models more accurately approximate human behavior and uncertainty across a wide range of similarity tasks, including a new dataset of human judgments spanning multiple levels of semantic abstraction. They also perform better on a diverse set of machine learning tasks, increasing generalization and out-of-distribution robustness. Thus, infusing neural networks with additional human knowledge yields a best-of-both-worlds representation that is both more consistent with human cognition and more practically useful, paving the way toward more robust, interpretable, and human-like artificial intelligence systems.