Biologically Plausible Brain Graph Transformer

📅 2025-02-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing brain graph analysis methods neglect the small-world architecture—characterized by hub nodes and functional modules—leading to insufficient biological plausibility and suboptimal disease detection performance. To address this, we propose the Biologically Plausible Brain Graph Transformer (BioBGT), the first model to integrate network-entanglement-driven node importance encoding with functional-module-aware self-attention, enabling joint modeling of global information propagation and the functional segregation/integration principles within a Transformer framework. BioBGT synergistically combines graph neural networks, modular clustering-guided attention masking, and biologically constrained graph encoding. Evaluated on three public neuroimaging datasets for brain disorder classification, BioBGT achieves significant improvements over state-of-the-art methods, yielding an average accuracy gain of 3.2% in Alzheimer’s disease and schizophrenia classification. Crucially, its learned representations are validated by neuroscience evidence, demonstrating strong biological interpretability.

Technology Category

Application Category

📝 Abstract
State-of-the-art brain graph analysis methods fail to fully encode the small-world architecture of brain graphs (accompanied by the presence of hubs and functional modules), and therefore lack biological plausibility to some extent. This limitation hinders their ability to accurately represent the brain's structural and functional properties, thereby restricting the effectiveness of machine learning models in tasks such as brain disorder detection. In this work, we propose a novel Biologically Plausible Brain Graph Transformer (BioBGT) that encodes the small-world architecture inherent in brain graphs. Specifically, we present a network entanglement-based node importance encoding technique that captures the structural importance of nodes in global information propagation during brain graph communication, highlighting the biological properties of the brain structure. Furthermore, we introduce a functional module-aware self-attention to preserve the functional segregation and integration characteristics of brain graphs in the learned representations. Experimental results on three benchmark datasets demonstrate that BioBGT outperforms state-of-the-art models, enhancing biologically plausible brain graph representations for various brain graph analytical tasks
Problem

Research questions and friction points this paper is trying to address.

Encode small-world brain graph architecture
Capture node importance in global propagation
Preserve functional segregation and integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Biologically Plausible Brain Graph Transformer
Network entanglement-based node encoding
Functional module-aware self-attention
🔎 Similar Papers
No similar papers found.