MoEGCL: Mixture of Ego-Graphs Contrastive Representation Learning for Multi-View Clustering

📅 2025-11-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multi-view clustering methods typically employ view-level graph fusion, which operates at a coarse granularity and suffers from limited representational capacity. To address this, we propose a fine-grained, sample-level graph fusion framework integrated with contrastive learning. First, we construct self-loop graphs to capture local sample structures, then introduce a Sample-level Mixture of Ego-Graphs Fusion (MoEGF) module that dynamically and adaptively fuses multi-view graph structures at the ego-graph level. Second, we design an Ego-Graph Contrastive Learning (EGCL) module that enforces intra-class compactness and inter-class separability by pulling together ego-graphs of same-class samples while pushing apart those of different classes. Our approach synergistically integrates graph neural networks, Mixture-of-Experts (MoE) gating mechanisms, and contrastive learning to significantly enhance clustering representation capability. Extensive experiments on multiple standard multi-view benchmarks demonstrate state-of-the-art performance. The source code is publicly available.

📝 Abstract
In recent years, advances in Graph Neural Networks (GNNs) have significantly propelled progress in Multi-View Clustering (MVC). However, existing methods suffer from coarse-grained graph fusion: they typically generate a separate graph structure for each view and then perform a weighted fusion of these graphs at the view level, which is a relatively crude strategy. To address this limitation, we present Mixture of Ego-Graphs Contrastive Representation Learning (MoEGCL), which consists of two main modules. First, we propose a novel Mixture of Ego-Graphs Fusion (MoEGF) module, which constructs ego-graphs and uses a Mixture-of-Experts network to fuse them at the sample level, in contrast to conventional view-level fusion. Second, we present an Ego-Graph Contrastive Learning (EGCL) module that aligns the fused representation with the view-specific representations. EGCL increases the representation similarity of samples from the same cluster, not merely views of the same sample, further strengthening the fine-grained graph representation. Extensive experiments demonstrate that MoEGCL achieves state-of-the-art results in deep multi-view clustering tasks. The source code is publicly available at https://github.com/HackerHyper/MoEGCL.
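The sample-level fusion idea can be illustrated in a few lines. The sketch below is a minimal NumPy toy, not the authors' implementation (their code is at the GitHub link above): `build_ego_graphs` approximates the self-loop k-NN ego-graphs, and `moe_fuse` applies a per-sample softmax gate over the views' ego-graphs, so each sample picks its own mixture of views instead of sharing one view-level weight. Function names and the gating form are illustrative assumptions.

```python
import numpy as np

def build_ego_graphs(X, k=2):
    """k-NN ego-graph per sample: row i holds edges from sample i to its
    k nearest neighbours, plus a self-loop (cf. the paper's self-loop graphs)."""
    n = X.shape[0]
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self from neighbour search
    A = np.zeros((n, n))
    for i in range(n):
        A[i, np.argsort(d[i])[:k]] = 1.0
    return A + np.eye(n)                 # add self-loops

def moe_fuse(ego_graphs, gate_logits):
    """Sample-level mixture: softmax gate weights (n, V) decide, per sample,
    how much each view's ego-graph row contributes to the fused graph."""
    w = np.exp(gate_logits - gate_logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)    # (n, V) per-sample view weights
    fused = sum(w[:, v:v + 1] * ego_graphs[v] for v in range(len(ego_graphs)))
    return fused, w
```

In the real model the gate logits come from a learned Mixture-of-Experts network conditioned on each sample's features; here they are just an input, to keep the fusion step itself visible.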
Problem

Research questions and friction points this paper is trying to address.

Addresses coarse-grained graph fusion in multi-view clustering
Implements fine-grained fusion at sample level using ego-graphs
Enhances representation similarity for samples within same clusters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture-of-Experts network enables fine-grained ego-graph fusion
Sample-level fusion replaces conventional view-level graph fusion
Ego-graph contrastive learning enhances cross-view representation alignment
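The cluster-aware contrastive objective behind EGCL can be sketched the same way: positives are all (fused, view-specific) pairs whose samples share a cluster assignment, not only the same-sample pair. A hedged NumPy toy follows; the function name and the exact loss form are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def egcl_loss(z_fused, z_view, labels, tau=0.5):
    """Cluster-aware InfoNCE-style loss (sketch): for each fused embedding,
    positives are view embeddings of samples with the same cluster label."""
    z1 = z_fused / np.linalg.norm(z_fused, axis=1, keepdims=True)
    z2 = z_view / np.linalg.norm(z_view, axis=1, keepdims=True)
    sim = np.exp(z1 @ z2.T / tau)               # (n, n) scaled similarities
    pos = labels[:, None] == labels[None, :]    # same-cluster positive mask
    loss = -np.log((sim * pos).sum(axis=1) / sim.sum(axis=1))
    return loss.mean()
```

With correct cluster assignments the positive mass dominates each row and the loss is small; pairing samples across clusters raises it, which is the pull-together / push-apart behaviour the bullets describe.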
👥 Authors
Jian Zhu
Zhejiang Lab, Hangzhou, China
Xin Zou
Hong Kong University of Science and Technology, Guangzhou, China
Jun Sun
Zhejiang Lab, Hangzhou, China
Cheng Luo
Zhejiang Lab, Hangzhou, China
Lei Liu
University of Science and Technology of China, Hefei, China
Lingfang Zeng
Professor, Zhejiang Lab
AI Chip, Non-Volatile Memories, Supercomputing Storage, and Privacy-enhanced Information Storage
Ning Zhang
Zhejiang Lab, Hangzhou, China
Bian Wu
East China Normal University
E-learning design, problem-solving learning, medical education
Chang Tang
Senior Member of IEEE/CCF/CSIG, School of Software Engineering, HUST, Wuhan, China
Machine Learning, Pattern Recognition
Lirong Dai
University of Science and Technology of China, Hefei, China