Feature, Alignment, and Supervision in Category Learning: A Comparative Approach with Children and Neural Networks

📅 2026-02-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates how human children and convolutional neural networks (CNNs) learn object categories under sparse labeling conditions, focusing on the interplay among supervision signals, feature types (size, shape, pattern), and perceptual alignment. Using a species-fair experimental framework, the authors systematically compare learning behavior in a few-shot semi-supervised task with controlled manipulation of these variables. The findings reveal that children generalize rapidly from minimal labels, yet their performance is significantly modulated by feature biases and alignment fidelity. In contrast, CNNs exhibit improved accuracy with increased supervision, but this gain is constrained by the underlying feature structure and alignment quality. The work provides evidence of mechanistic differences in category learning between humans and artificial systems, underscoring the need to compare interactive learning mechanisms rather than rely solely on aggregate accuracy metrics.

📝 Abstract
Understanding how humans and machines learn from sparse data is central to cognitive science and machine learning. Using a species-fair design, we compare children and convolutional neural networks (CNNs) in a few-shot semi-supervised category learning task. Both learners are exposed to novel object categories under identical conditions, receiving mixtures of labeled and unlabeled exemplars while we vary supervision (1/3/6 labels), target feature (size, shape, pattern), and perceptual alignment (high/low). We find that children generalize rapidly from minimal labels but show strong feature-specific biases and sensitivity to alignment. CNNs show a different interaction profile: added supervision improves performance, but alignment and feature structure moderate the impact of that additional supervision. These results show that human-model comparisons must be drawn under the right conditions, emphasizing interactions among supervision, feature structure, and alignment rather than overall accuracy.
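The abstract describes a fully crossed factorial design. As a minimal sketch (variable names are my own, not from the paper), the condition grid implied by the three manipulations can be enumerated like this:

```python
from itertools import product

# Hypothetical enumeration of the paper's 3 x 3 x 2 factorial design:
# supervision level x target feature x perceptual alignment.
supervision_levels = [1, 3, 6]                  # number of labeled exemplars
target_features = ["size", "shape", "pattern"]  # feature defining the category
alignment_levels = ["high", "low"]              # perceptual alignment condition

conditions = list(product(supervision_levels, target_features, alignment_levels))
print(len(conditions))  # 18 cells in the full design
```

This is only an illustration of the design space; how exemplars are assigned to cells and how learners are evaluated within each cell is specified in the paper itself.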
Problem

Research questions and friction points this paper is trying to address.

category learning
few-shot learning
semi-supervised learning
feature bias
perceptual alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

few-shot learning
semi-supervised learning
species-fair comparison
feature bias
perceptual alignment