Multi-modal Multi-kernel Graph Learning for Autism Prediction and Biomarker Discovery

📅 2023-03-03
🏛️ IEEE Transactions on Computational Biology and Bioinformatics
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study addresses two key challenges in multimodal brain imaging fusion: strong inter-modal negative interference and difficulty in extracting heterogeneous structural information from graph representations. To enable precise autism spectrum disorder (ASD) prediction and biomarker discovery, we propose an interpretable graph learning framework. Methodologically, we introduce a novel multimodal graph embedding module—comprising adaptive functional and supervised graph generation—and a multi-kernel graph learning module that integrates cross-scale convolutional aggregation with cross-kernel tensor fusion, enabling end-to-end modeling. Evaluated on the ABIDE dataset, our framework significantly outperforms state-of-the-art methods. It identifies highly discriminative brain regions—including the default mode network and amygdala—shedding light on underlying neuropathological mechanisms. Furthermore, it delivers interpretable, cross-modal neuroimaging biomarkers with clinical relevance for ASD diagnosis and stratification.
📝 Abstract
Graph learning-based multi-modal integration and classification is one of the most challenging tasks in disease prediction. To offset the negative interference between modalities during multi-modal integration and to extract heterogeneous information from graphs, we propose a novel method called MMKGL (Multi-modal Multi-Kernel Graph Learning). To address the negative interference between modalities, we propose a multi-modal graph embedding module to construct a multi-modal graph. Unlike conventional methods that manually construct a single static graph for all modalities, each modality generates a separate graph by adaptive learning, and a function graph and a supervision graph are introduced for optimization during the multi-graph fusion embedding process. We then propose a multi-kernel graph learning module to extract heterogeneous information from the multi-modal graph. Information at different levels of the multi-modal graph is aggregated by convolutional kernels with different receptive-field sizes, from which a cross-kernel discovery tensor is generated for disease prediction. Our method is evaluated on the benchmark Autism Brain Imaging Data Exchange (ABIDE) dataset and outperforms state-of-the-art methods. In addition, our model identifies discriminative brain regions associated with autism, providing guidance for the study of autism pathology.
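The abstract's adaptive graph generation can be illustrated with a minimal sketch: instead of hand-building a static similarity graph, each modality learns a projection of its node (subject) features and derives soft edge weights from pairwise similarities. The projection matrix `W`, the dot-product similarity, and the row-wise softmax normalization here are illustrative assumptions, not the paper's exact module; in MMKGL such parameters would be trained end-to-end with the classifier.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_graph(X, W):
    # X: (num_nodes, num_features) node features for one modality.
    # W: (num_features, embed_dim) learnable projection (hypothetical
    #    stand-in for the paper's learned graph-generation parameters).
    Z = X @ W                  # learned per-node embedding
    S = Z @ Z.T                # pairwise similarity scores
    return softmax(S, axis=1)  # soft adjacency: each row sums to 1
```

One such graph per modality could then be fused into the multi-modal graph the abstract describes; the fusion step itself is omitted here.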
Problem

Research questions and friction points this paper is trying to address.

Multi-modal integration challenges
Negative impact between modalities
Autism prediction and biomarker discovery
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-modal graph embedding module
Multi-kernel graph learning module
Cross-kernel discovery tensor
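The multi-kernel idea above can be sketched as repeated propagation over a normalized adjacency, keeping the intermediate results at several receptive-field sizes (k-hop neighborhoods) and stacking them into a cross-kernel tensor. This is a generic multi-scale graph-convolution sketch under assumed conventions (symmetric normalization with self-loops, powers of the adjacency as "kernels"), not the paper's exact architecture.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^-1/2 (A + I) D^-1/2.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def multi_kernel_aggregate(A, X, scales=(1, 2, 3)):
    # Aggregate node features over k-hop neighborhoods, one "kernel"
    # per receptive-field size in `scales`, then stack the results.
    A_norm = normalize_adj(A)
    kernels = []
    H = X
    for k in range(1, max(scales) + 1):
        H = A_norm @ H           # one more propagation step (hop)
        if k in scales:
            kernels.append(H)
    # Cross-kernel tensor: (num_kernels, num_nodes, num_features);
    # a downstream classifier would fuse and flatten this for prediction.
    return np.stack(kernels)
```

Stacking rather than summing the per-scale outputs preserves which receptive field each feature came from, which is what makes a cross-kernel fusion step possible downstream.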
Junbin Mao
The Hunan Province Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha 410083, China
Jin Liu
The Hunan Province Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha 410083, China
Han Lin
The School of Science and Engineering, University of Dundee, DD1 4HN Dundee, United Kingdom
Hulin Kuang
Central South University
medical image processing, intelligent transportation systems, deep learning, machine learning
Yi Pan
The Faculty of Computer Science and Control Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China