🤖 AI Summary
This study identifies a pervasive performance degradation in graph neural networks (GNNs) that arises from message aggregation when modeling functional brain connectomes. Systematic evaluation across four large-scale, multi-center fMRI cohorts reveals that standard GNNs, particularly Graph Attention Networks (GATs), suffer from over-smoothing, which reduces predictive accuracy. To address this issue, we propose an interpretable hybrid architecture that integrates linear regression and a GAT through two parallel pathways: the linear pathway preserves anatomically grounded, local connection weights, while the GAT pathway captures global topological dependencies. Evaluated on cognitive and clinical prediction tasks, the framework achieves an average 12.3% improvement over state-of-the-art GNNs. Crucially, it yields neuroanatomically interpretable functional connectivity maps, combining robust prediction with mechanistic interpretability and moving brain connectomics from black-box modeling toward mechanism-driven analysis.
📝 Abstract
The functional brain connectome is crucial for deciphering the neural mechanisms underlying cognitive functions and neurological disorders. Graph deep learning models have recently gained tremendous popularity in this field, yet their actual effectiveness in modeling the brain connectome remains unclear. In this study, we re-examine graph deep learning models on four large-scale neuroimaging studies encompassing diverse cognitive and clinical outcomes. Surprisingly, we find that the message aggregation mechanism, a hallmark of graph deep learning models, does not improve predictive performance as commonly assumed but instead consistently degrades it. To address this issue, we propose a hybrid model that combines a linear model with a graph attention network through dual pathways, achieving robust predictions and enhanced interpretability by revealing both localized and global neural connectivity patterns. Our findings urge caution in adopting complex deep learning models for functional brain connectome analysis and emphasize the need for rigorous experimental designs that establish tangible performance gains and, perhaps more importantly, pursue improvements in model interpretability.
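To make the dual-pathway idea concrete, below is a minimal sketch of how a linear pathway over the functional-connectivity matrix and a GAT pathway over the graph could run in parallel and be fused. It assumes PyTorch and PyTorch Geometric; the class name, layer sizes, pooling, and the simple additive fusion are illustrative assumptions, not the authors' exact architecture.

```python
# Hypothetical sketch of a dual-pathway hybrid model (PyTorch + PyTorch Geometric).
# Layer sizes, pooling, and fusion are assumptions for illustration only.
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv, global_mean_pool


class DualPathwayHybrid(nn.Module):
    def __init__(self, num_nodes, node_feat_dim, hidden_dim, out_dim):
        super().__init__()
        # Linear pathway: a single linear map over the flattened
        # connectivity matrix, so each weight corresponds to one connection.
        self.linear_path = nn.Linear(num_nodes * num_nodes, out_dim)
        # GAT pathway: attention-based message passing to capture
        # global topological dependencies across the graph.
        self.gat1 = GATConv(node_feat_dim, hidden_dim, heads=4, concat=True)
        self.gat2 = GATConv(hidden_dim * 4, hidden_dim, heads=1, concat=False)
        self.gat_head = nn.Linear(hidden_dim, out_dim)

    def forward(self, fc_matrix, x, edge_index, batch):
        # fc_matrix: (batch, num_nodes, num_nodes) functional connectome
        # x: node features; edge_index: graph edges; batch: graph assignment
        linear_out = self.linear_path(fc_matrix.flatten(start_dim=1))
        h = torch.relu(self.gat1(x, edge_index))
        h = torch.relu(self.gat2(h, edge_index))
        gat_out = self.gat_head(global_mean_pool(h, batch))
        # Fuse the two pathways; a simple sum here, though the paper's
        # fusion scheme may differ.
        return linear_out + gat_out
```

In this sketch the linear pathway acts directly on the flattened connectivity matrix, so its weights map one-to-one onto individual connections, which is what would make the localized connectivity patterns readable; the GAT pathway contributes the global, topology-aware signal.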