Human-like Cognitive Generalization for Large Models via Brain-in-the-loop Supervision

📅 2025-05-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models exhibit significant limitations in human-like cognitive capabilities, particularly abstract reasoning, logical inference, and cross-scenario generalization. To address this, the paper proposes *brain-in-the-loop supervised learning*, a paradigm that uses a small set of sparse fNIRS/EEG neural signals as a cognitive supervision source. The approach combines concept-alignment knowledge distillation, neurosemantic embedding mapping, and few-shot adaptive fine-tuning to transfer human conceptual structures into large models in an interpretable way. This framework goes beyond purely data-driven learning by grounding model representations in biologically informed cognitive priors. Empirically, it achieves substantial improvements on few-shot and zero-shot learning benchmarks as well as out-of-distribution recognition tasks. Crucially, it yields concept representations with explicit cognitive interpretability, directly linking model semantics to human neural correlates. The work establishes a principled pathway toward intelligent systems with human-like abstraction and generalization capabilities.
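The summary names the method's components but gives no equations. As one plausible reading, the "neurosemantic embedding mapping" could be a linear map from model embeddings into a neural-feature space, trained so that an alignment term can be added to the usual task loss. A minimal numpy sketch under that assumption (all function names here are hypothetical, not from the paper):

```python
import numpy as np

def fit_mapping(model_emb, neural_emb, ridge=1e-2):
    """Closed-form ridge regression for a linear map W from model-embedding
    space to (z-scored) neural-feature space -- one way to read the
    'neurosemantic embedding mapping' step."""
    neural = (neural_emb - neural_emb.mean(0)) / (neural_emb.std(0) + 1e-8)
    d = model_emb.shape[1]
    # Solve (X^T X + ridge * I) W = X^T Y for W.
    return np.linalg.solve(model_emb.T @ model_emb + ridge * np.eye(d),
                           model_emb.T @ neural)

def alignment_loss(model_emb, neural_emb, W):
    """Mean squared distance between mapped model embeddings and z-scored
    neural features -- the auxiliary term a brain-in-the-loop objective
    could add to the task loss."""
    projected = model_emb @ W
    neural = (neural_emb - neural_emb.mean(0)) / (neural_emb.std(0) + 1e-8)
    return float(np.mean((projected - neural) ** 2))
```

With paired stimuli (model embeddings of the inputs and the brain responses they evoke), a fitted `W` should drive the alignment term well below its value for an uninformed mapping; the actual paper may use a learned nonlinear map and distillation losses instead.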

📝 Abstract
Recent advancements in deep neural networks (DNNs), particularly large-scale language models, have demonstrated remarkable capabilities in image and natural language understanding. Although scaling up model parameters with increasing volumes of training data has progressively improved DNN capabilities, achieving the complex cognitive abilities intrinsic to human cognition, such as understanding abstract concepts, reasoning, and adapting to novel scenarios, remains a major challenge. In this study, we show that brain-in-the-loop supervised learning, utilizing a small set of brain signals, can effectively transfer human conceptual structures to DNNs, significantly enhancing their comprehension of abstract and even unseen concepts. Experimental results further indicate that the enhanced cognitive capabilities lead to substantial performance gains in challenging tasks, including few-shot/zero-shot learning and out-of-distribution recognition, while also yielding highly interpretable concept representations. These findings highlight that human-in-the-loop supervision can effectively augment the complex cognitive abilities of large models, offering a promising pathway toward developing more human-like cognitive abilities in artificial systems.
Problem

Research questions and friction points this paper is trying to address.

Enhancing cognitive generalization in large models via brain signals
Improving abstract concept understanding in deep neural networks
Boosting few-shot and zero-shot learning with human-like cognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Brain-in-the-loop supervision enhances DNN cognition
Human signals transfer abstract concept understanding
Improved few-shot and zero-shot learning performance
Jiaxuan Chen
College of Computer Science and Technology, Zhejiang University, China; State Key Lab of Brain-Machine Intelligence, Zhejiang University, China
Yu Qi
State Key Lab of Brain-Machine Intelligence, Zhejiang University, China; Affiliated Mental Health Center and Hangzhou Seventh People's Hospital, MOE Frontier Science Center for Brain Science and Brain-machine Integration, Zhejiang University, Hangzhou, China
Yueming Wang
Zhejiang University
Brain-computer interface · Pattern recognition · Machine learning · Neural signal processing
Gang Pan
Tianjin University
Computer vision · Multimodal · AI