Mixture of Cognitive Reasoners: Modular Reasoning with Brain-Like Specialization

📅 2025-06-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the limited interpretability and controllability of large language models (LLMs) by proposing a brain-inspired modular Transformer architecture. Motivated by functional parcellation in the human brain—specifically language, logical reasoning, social cognition, and memory subsystems—the architecture partitions Transformer layers into four specialized expert modules. We introduce cognitive-guided curriculum learning, module-level routing, and inference-style intervention mechanisms to foster functional specialization. To our knowledge, this is the first work to systematically integrate neuroscientific functional atlases into LLM architectural design, enabling fine-grained, controllable reasoning. Experiments across seven reasoning benchmarks demonstrate significant improvements over non-specialized baselines. Ablation studies show marked performance degradation in domain-specific tasks upon removal of corresponding modules, confirming functional specificity. Moreover, the architecture supports real-time inference-path modulation, enhancing both interpretability and controllability.

Technology Category

Application Category

📝 Abstract
Human intelligence emerges from the interaction of specialized brain networks, each dedicated to distinct cognitive functions such as language processing, logical reasoning, social understanding, and memory retrieval. Inspired by this biological observation, we introduce the Mixture of Cognitive Reasoners (MiCRo) architecture and training paradigm: a modular transformer-based language model with a training curriculum that encourages the emergence of functional specialization among different modules. Inspired by studies in neuroscience, we partition the layers of a pretrained transformer model into four expert modules, each corresponding to a well-studied cognitive brain network. Our Brain-Like model has three key benefits over the state of the art: First, the specialized experts are highly interpretable and functionally critical, where removing a module significantly impairs performance on domain-relevant benchmarks. Second, our model outperforms comparable baselines that lack specialization on seven reasoning benchmarks. And third, the model's behavior can be steered at inference time by selectively emphasizing certain expert modules (e.g., favoring social over logical reasoning), enabling fine-grained control over the style of its response. Our findings suggest that biologically inspired inductive biases involved in human cognition lead to significant modeling gains in interpretability, performance, and controllability.
Problem

Research questions and friction points this paper is trying to address.

Developing modular AI with brain-like specialized cognitive functions
Enhancing interpretability and performance in reasoning tasks
Enabling fine-grained control over model response styles
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular transformer model with specialized cognitive experts
Training curriculum for functional specialization emergence
Inference-time expert module steering for response control
🔎 Similar Papers
No similar papers found.