🤖 AI Summary
Existing multi-view clustering methods typically employ view-level graph fusion, which operates at a coarse granularity and suffers from limited representational capacity. To address this, we propose a fine-grained, sample-level graph fusion framework integrated with contrastive learning. First, we construct ego graphs to capture local sample structures, then introduce a Sample-level Mixture of Ego-Graphs Fusion (MoEGF) module that adaptively fuses multi-view graph structures at the ego-graph level. Second, we design an Ego-Graph Contrastive Learning (EGCL) module that enforces intra-class compactness and inter-class separability by pulling together ego graphs of same-class samples while pushing apart those of different classes. Our approach integrates graph neural networks, Mixture-of-Experts (MoE) gating, and contrastive learning to substantially enhance clustering representations. Extensive experiments on standard multi-view benchmarks demonstrate state-of-the-art performance. The source code is publicly available.
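The sample-level fusion described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: it assumes each view contributes one ego-graph adjacency matrix per sample, and a per-sample gating network (here a single linear map `gate_w`, a hypothetical parameter) produces softmax weights over views, so each sample mixes its ego graphs with its own weights rather than sharing one weight per view.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def moegf_fuse(ego_graphs, features, gate_w):
    """Sample-level mixture of ego graphs (illustrative sketch).

    ego_graphs: (V, N, N) ego-graph adjacency from each of V views;
                row i of view v is sample i's ego graph in that view.
    features:   (N, D) per-sample features that drive the gate.
    gate_w:     (D, V) hypothetical gating parameters (one expert per view).
    Returns the fused (N, N) graph and the (N, V) gate weights.
    """
    gates = softmax(features @ gate_w, axis=-1)       # (N, V), rows sum to 1
    # weight view v's ego graph of sample i by gates[i, v] and sum over views
    fused = np.einsum('nv,vnm->nm', gates, ego_graphs)
    return fused, gates
```

Because the gate is evaluated per sample, two samples can draw their neighborhoods from different views — the fine-grained behavior that a single view-level weight cannot express.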
📝 Abstract
In recent years, the advancement of Graph Neural Networks (GNNs) has significantly propelled progress in Multi-View Clustering (MVC). However, existing methods suffer from coarse-grained graph fusion: they typically generate a separate graph structure for each view and then fuse these graphs with a single weight per view, a strategy too coarse to adapt to individual samples. To address this limitation, we present Mixture of Ego-Graphs Contrastive Representation Learning (MoEGCL), which consists of two modules. First, we propose a Mixture of Ego-Graphs Fusion (MoEGF) module, which constructs ego graphs and employs a Mixture-of-Experts network to fuse them at the sample level, rather than performing conventional view-level fusion. Second, we present an Ego-Graph Contrastive Learning (EGCL) module that aligns the fused representation with the view-specific representations. EGCL increases the representation similarity of samples from the same cluster, rather than only aligning representations of the same sample across views, further strengthening the fine-grained graph representation. Extensive experiments demonstrate that MoEGCL achieves state-of-the-art results on deep multi-view clustering tasks. The source code is publicly available at https://github.com/HackerHyper/MoEGCL.
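The cluster-level alignment that EGCL performs can be sketched with a supervised-contrastive-style loss. This is an assumed formulation, not the paper's exact objective: given fused and view-specific embeddings plus pseudo cluster labels, every same-cluster pair is treated as a positive, so ego-graph representations of an entire cluster are pulled together rather than only the two views of one sample.

```python
import numpy as np

def egcl_loss(z_fused, z_view, labels, tau=0.5):
    """Ego-graph contrastive loss (illustrative sketch).

    z_fused, z_view: (N, D) fused / view-specific embeddings.
    labels: (N,) pseudo cluster assignments; samples sharing a label
            are positives, so same-cluster pairs are pulled together.
    tau: temperature (hypothetical default).
    """
    # L2-normalize so the dot product is cosine similarity
    z_fused = z_fused / np.linalg.norm(z_fused, axis=1, keepdims=True)
    z_view = z_view / np.linalg.norm(z_view, axis=1, keepdims=True)
    sim = z_fused @ z_view.T / tau                       # (N, N) logits
    # log-softmax over all candidates for each anchor
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = labels[:, None] == labels[None, :]             # same-cluster mask
    # average log-likelihood over each anchor's positive set
    per_anchor = (logp * pos).sum(axis=1) / pos.sum(axis=1)
    return -per_anchor.mean()
```

Note the positive mask includes the anchor's own index, so the plain cross-view InfoNCE objective (positives = same sample only) is the special case where every sample is its own cluster.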