🤖 AI Summary
This study addresses two key challenges in multimodal brain imaging fusion: strong inter-modal negative interference and difficulty in extracting heterogeneous structural information from graph representations. To enable precise autism spectrum disorder (ASD) prediction and biomarker discovery, we propose an interpretable graph learning framework. Methodologically, we introduce a novel multimodal graph embedding module—comprising adaptive functional and supervised graph generation—and a multi-kernel graph learning module that integrates cross-scale convolutional aggregation with cross-kernel tensor fusion, enabling end-to-end modeling. Evaluated on the ABIDE dataset, our framework significantly outperforms state-of-the-art methods. It identifies highly discriminative brain regions—including the default mode network and amygdala—shedding light on underlying neuropathological mechanisms. Furthermore, it delivers interpretable, cross-modal neuroimaging biomarkers with clinical relevance for ASD diagnosis and stratification.
📝 Abstract
Graph learning-based multi-modal integration and classification remains one of the most challenging problems in disease prediction. To offset the negative impact between modalities during multi-modal integration and to extract heterogeneous information from graphs, we propose MMKGL (Multi-Modal Multi-Kernel Graph Learning). To address inter-modal negative impact, a multi-modal graph embedding module constructs a multi-modal graph: unlike conventional methods that manually build a single static graph for all modalities, each modality generates its own graph by adaptive learning, and a function graph and a supervision graph are introduced to optimize the multi-graph fusion embedding process. A multi-kernel graph learning module then extracts heterogeneous information from the multi-modal graph: information at different levels is aggregated by convolutional kernels with different receptive field sizes, and a cross-kernel discovery tensor is generated for disease prediction. Evaluated on the benchmark Autism Brain Imaging Data Exchange (ABIDE) dataset, our method outperforms state-of-the-art approaches. In addition, our model identifies discriminative brain regions associated with autism, providing guidance for the study of autism pathology.
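The cross-scale aggregation idea can be illustrated with a minimal sketch: successive powers of a normalized adjacency matrix act as convolution kernels with growing (k-hop) receptive fields, and the per-kernel outputs are stacked into a cross-kernel tensor. This is a hedged stand-in for the paper's multi-kernel graph learning module, not its actual implementation; all function names, dimensions, and the random inputs are illustrative assumptions.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, as in standard GCNs.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def multi_kernel_conv(A, X, weights):
    """Aggregate node features at multiple scales.

    Kernel k uses the k-th power of the normalized adjacency, i.e. a
    k-hop neighborhood; the per-kernel outputs are stacked into a
    cross-kernel tensor of shape (num_kernels, n_nodes, out_dim).
    """
    A_norm = normalize_adj(A)
    A_power = np.eye(A.shape[0])
    outputs = []
    for W in weights:
        A_power = A_power @ A_norm                # grow receptive field by one hop
        outputs.append(np.maximum(A_power @ X @ W, 0.0))  # linear map + ReLU
    return np.stack(outputs)

# Toy example: 6 brain regions (nodes), 4 input features, 3 kernels.
rng = np.random.default_rng(0)
n_nodes, in_dim, out_dim, n_kernels = 6, 4, 3, 3
A = (rng.random((n_nodes, n_nodes)) > 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T                    # symmetric graph, no self-loops
X = rng.standard_normal((n_nodes, in_dim))
weights = [rng.standard_normal((in_dim, out_dim)) for _ in range(n_kernels)]

T = multi_kernel_conv(A, X, weights)
print(T.shape)  # (3, 6, 3): kernels x nodes x features
```

In the paper, the resulting cross-kernel tensor would feed a classifier for disease prediction; here it is simply printed to show its shape.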