Joint Imaging-ROI Representation Learning via Cross-View Contrastive Alignment for Brain Disorder Classification

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the lack of a unified framework for jointly modeling whole-brain imaging data and localized region-of-interest (ROI) graphs in neuroimaging classification, which has hindered systematic evaluation of their complementary contributions. To this end, the authors propose a cross-view contrastive alignment framework that jointly optimizes multi-view embeddings—extracted via 3D convolutional networks and graph neural networks—within a shared latent space through bidirectional contrastive objectives. This approach establishes, for the first time, a controllable and comparable joint learning paradigm that enables explicit fusion of global and local representations. Evaluated on the ADHD-200 and ABIDE datasets, the model consistently outperforms single-modality baselines across multiple backbone architectures. Further interpretability analyses corroborate the complementary nature of discriminative features learned by the dual branches.

📝 Abstract
Brain imaging classification is commonly approached from two perspectives: modeling the full image volume to capture global anatomical context, or constructing ROI-based graphs to encode localized and topological interactions. Although both representations have demonstrated independent efficacy, their relative contributions and potential complementarity remain insufficiently understood. Existing fusion approaches are typically task-specific and do not enable controlled evaluation of each representation under consistent training settings. To address this gap, we propose a unified cross-view contrastive framework for joint imaging-ROI representation learning. Our method learns subject-level global (imaging) and local (ROI-graph) embeddings and aligns them in a shared latent space using a bidirectional contrastive objective, encouraging representations from the same subject to converge while separating those from different subjects. This alignment produces comparable embeddings suitable for downstream fusion and enables systematic evaluation of imaging-only, ROI-only, and joint configurations within a unified training protocol. Extensive experiments on the ADHD-200 and ABIDE datasets demonstrate that joint learning consistently improves classification performance over either branch alone across multiple backbone choices. Moreover, interpretability analyses reveal that imaging-based and ROI-based branches emphasize distinct yet complementary discriminative patterns, explaining the observed performance gains. These findings provide principled evidence that explicitly integrating global volumetric and ROI-level representations is a promising direction for neuroimaging-based brain disorder classification. The source code is available at https://anonymous.4open.science/r/imaging-roi-contrastive-152C/.
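The bidirectional contrastive objective described above (pulling a subject's imaging and ROI-graph embeddings together while pushing apart those of different subjects) can be sketched as a symmetric InfoNCE loss. This is a minimal NumPy illustration of the general technique, not the authors' implementation; the function name, temperature value, and embedding shapes are assumptions.

```python
import numpy as np

def symmetric_infonce(z_img, z_roi, temperature=0.1):
    """Hypothetical sketch of a bidirectional contrastive loss that aligns
    imaging (global) and ROI-graph (local) embeddings of the same subjects.
    Rows of z_img and z_roi are paired: row i of each comes from subject i."""
    # L2-normalize each subject embedding so similarity is cosine similarity
    z_img = z_img / np.linalg.norm(z_img, axis=1, keepdims=True)
    z_roi = z_roi / np.linalg.norm(z_roi, axis=1, keepdims=True)
    # Pairwise similarity between the two views, scaled by temperature
    logits = z_img @ z_roi.T / temperature
    n = logits.shape[0]

    def xent(l):
        # Cross-entropy with the matching subject (diagonal) as the positive
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_prob = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_prob[np.arange(n), np.arange(n)].mean()

    # Average the imaging-to-ROI and ROI-to-imaging directions (bidirectional)
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
z_a = rng.normal(size=(8, 32))
loss_aligned = symmetric_infonce(z_a, z_a)                    # matched views
loss_random = symmetric_infonce(z_a, rng.normal(size=(8, 32)))  # unmatched
print(loss_aligned < loss_random)  # aligned pairs yield a lower loss
```

Because the positives sit on the diagonal of the similarity matrix, the same loss applied to `logits` and `logits.T` covers both alignment directions; this symmetry is what makes the two branches' embeddings directly comparable in the shared latent space.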
Problem

Research questions and friction points this paper is trying to address.

brain disorder classification
imaging-ROI representation
cross-view alignment
neuroimaging
representation fusion
Innovation

Methods, ideas, or system contributions that make the work stand out.

cross-view contrastive learning
joint representation learning
brain disorder classification
ROI-graph
neuroimaging fusion