🤖 AI Summary
Existing generative brain network models rely predominantly on graph neural networks (GNNs) that capture only structural topology, neglecting domain-specific cognitive characteristics. Method: We propose CogGNN, the first cognitively grounded generative model for brain networks, which introduces visual memory and other cognitive functions into the network generation process. It jointly optimizes structural reconstruction and cognitive alignment via a novel visual-memory-aware loss function, and integrates multi-view brain connectomes with encoded visual stimuli. Contribution/Results: Evaluated on population-level connectome template generation, CogGNN significantly outperforms state-of-the-art methods. The generated templates exhibit superior structural connectivity fidelity and enhanced cognitive interpretability, demonstrated through task-fMRI activation patterns and behavioral correlates. This work establishes a new paradigm for cognition-driven brain network modeling, bridging neuroimaging, cognitive neuroscience, and deep generative modeling.
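The joint objective described above, structural reconstruction plus a weighted cognitive-alignment term, can be sketched as a simple sum of two losses. This is an illustrative toy version under assumed definitions, not the authors' implementation; the names `frobenius_sq`, `cog_alignment`, `coggnn_loss`, and the trade-off weight `lam` are hypothetical:

```python
# Toy sketch of a joint "structure + cognition" objective, assuming:
# - structural term: squared Frobenius distance between generated and
#   target adjacency matrices (lists of lists of floats)
# - cognitive term: squared distance between the generated template's
#   embedding and an encoded visual stimulus vector
# All names are illustrative, not taken from the CogGNN paper.

def frobenius_sq(a, b):
    """Squared Frobenius distance between two adjacency matrices."""
    return sum((x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def cog_alignment(embedding, visual_code):
    """Squared Euclidean distance between an embedding and a visual code."""
    return sum((e - v) ** 2 for e, v in zip(embedding, visual_code))

def coggnn_loss(gen_adj, target_adj, embedding, visual_code, lam=0.5):
    """Co-optimization: structural fidelity plus weighted cognitive alignment."""
    return frobenius_sq(gen_adj, target_adj) + lam * cog_alignment(embedding, visual_code)
```

In a real model both terms would be differentiable tensor operations minimized by gradient descent; the sketch only shows how the two objectives combine into one scalar.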
📝 Abstract
Generative learning has advanced network neuroscience, enabling tasks such as graph super-resolution, temporal graph prediction, and multimodal brain graph fusion. However, current methods, mainly based on graph neural networks (GNNs), focus solely on structural and topological properties, neglecting cognitive traits. To address this, we introduce the first cognified generative model, CogGNN, which endows GNNs with cognitive capabilities (e.g., visual memory) so that the generated brain networks preserve cognitive features. While the framework is broadly applicable, we present a variant designed to integrate visual input, a key factor in brain functions such as pattern recognition and memory recall. As a proof of concept, we use our model to learn connectional brain templates (CBTs), population-level fingerprints derived from multi-view brain networks. Unlike prior work, which overlooks cognitive properties, CogGNN generates CBTs that are both cognitively and structurally meaningful. Our contributions are: (i) a novel cognition-aware generative model with a visual-memory-based loss; (ii) a CBT-learning framework with a co-optimization strategy that yields well-centered, discriminative, cognitively enhanced templates. Extensive experiments show that CogGNN outperforms state-of-the-art methods, establishing a strong foundation for cognitively grounded brain network modeling.
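To make the "well-centered" criterion for a CBT concrete: one common formalization (assumed here for illustration, not quoted from the paper) scores a candidate template by its mean Frobenius distance to every network in the population, with lower meaning better centered. The names `centeredness` and `mean_template` are hypothetical:

```python
# Illustrative centeredness check for a connectional brain template (CBT),
# assuming networks are adjacency matrices given as lists of lists.
# A well-centered template minimizes its average distance to the population.

def frobenius(a, b):
    """Frobenius distance between two adjacency matrices."""
    return sum((x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb)) ** 0.5

def centeredness(template, population):
    """Mean Frobenius distance from the template to each subject's network."""
    return sum(frobenius(template, net) for net in population) / len(population)

def mean_template(population):
    """Naive baseline template: element-wise mean across subjects."""
    n = len(population)
    rows, cols = len(population[0]), len(population[0][0])
    return [[sum(net[i][j] for net in population) / n for j in range(cols)]
            for i in range(rows)]
```

A learned template like CogGNN's is expected to beat such a naive element-wise mean on both centeredness and downstream discriminativeness, which is what the co-optimization strategy targets.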