Aligning Machine and Human Visual Representations across Abstraction Levels

📅 2024-09-10
🏛️ arXiv.org
📈 Citations: 13
Influential: 0
🤖 AI Summary
Deep neural networks fail to replicate the hierarchical conceptual structure of human vision, which spans fine-grained to coarse-grained abstractions, resulting in weaker generalization and out-of-distribution (OOD) robustness than human vision. Method: a cross-level human-alignment framework that, for the first time, systematically identifies and corrects representational misalignments between models and humans across multiple levels of semantic abstraction. Using knowledge distillation, multi-level semantic structure is transferred from a teacher model trained to emulate human similarity judgments into pretrained vision foundation models. To support this, the authors introduce the first human similarity-judgment dataset annotated across abstraction levels. Contribution: the approach substantially improves model fidelity to human behavioral patterns and uncertainty in similarity tasks, and enhances generalization and OOD robustness across multiple benchmarks, establishing a paradigm for developing cognitively aligned, general-purpose visual representations.

📝 Abstract
Deep neural networks have achieved success across a wide range of applications, including as models of human behavior in vision tasks. However, neural network training and human learning differ in fundamental ways, and neural networks often fail to generalize as robustly as humans do, raising questions regarding the similarity of their underlying representations. What is missing for modern learning systems to exhibit more human-like behavior? We highlight a key misalignment between vision models and humans: whereas human conceptual knowledge is hierarchically organized from fine- to coarse-scale distinctions, model representations do not accurately capture all these levels of abstraction. To address this misalignment, we first train a teacher model to imitate human judgments, then transfer human-like structure from its representations into pretrained state-of-the-art vision foundation models. These human-aligned models more accurately approximate human behavior and uncertainty across a wide range of similarity tasks, including a new dataset of human judgments spanning multiple levels of semantic abstraction. They also perform better on a diverse set of machine learning tasks, increasing generalization and out-of-distribution robustness. Thus, infusing neural networks with additional human knowledge yields a best-of-both-worlds representation that is both more consistent with human cognition and more practically useful, paving the way toward more robust, interpretable, and human-like artificial intelligence systems.
Problem

Research questions and friction points this paper is trying to address.

Aligning machine and human visual representations across abstraction levels
Improving generalization and robustness in vision models
Transferring human hierarchical knowledge to neural networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Teacher model imitates human judgments
Transfer human-aligned structure to vision models
Fine-tuning improves generalization and robustness
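The paper does not spell out its distillation objective here, but the core idea in the bullets above — transferring the teacher's human-aligned representational structure into a student vision model — can be sketched as matching pairwise similarity structures. A minimal illustration (function names and the mean-squared loss are assumptions, not the authors' actual objective):

```python
import numpy as np

def similarity_matrix(feats):
    """Cosine similarity between all pairs of representations (rows of feats)."""
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return normed @ normed.T

def alignment_loss(student_feats, teacher_feats):
    """Penalize mismatch between the student's and the human-aligned
    teacher's pairwise similarity structure over the same image batch."""
    s = similarity_matrix(student_feats)
    t = similarity_matrix(teacher_feats)
    return float(np.mean((s - t) ** 2))
```

Minimizing such a loss while fine-tuning the student would pull its representational geometry toward the teacher's, without requiring the two models to share an embedding dimension.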
Lukas Muttenthaler
TU Berlin & Google DeepMind
Machine Learning, Representation Learning, AI Alignment, Computer Vision, Cognitive Science
Klaus Greff
Research Scientist at Google Brain
Machine Learning, Neural Networks
Frieda Born
Machine Learning Group, Technische Universität Berlin
Bernhard Spitzer
Max Planck Institute for Human Development
Simon Kornblith
Anthropic
M. C. Mozer
Google DeepMind
Klaus-Robert Müller
Machine Learning Group, Technische Universität Berlin
Thomas Unterthiner
Google DeepMind
Andrew K. Lampinen
Google DeepMind