DWCL: Dual-Weighted Contrastive Learning for Multi-View Clustering

📅 2024-11-26
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address unreliable cross-view pairings and representation degeneration in multi-view clustering, this paper proposes a dual-weighted contrastive learning framework. First, it introduces a Best-Other (B-O) contrast mechanism that pairs each view with the best-quality view rather than all possible view pairs, reducing unreliable cross-views at low computational cost. Second, it designs a dual-weighting strategy that jointly incorporates view quality and inter-view discrepancy; the paper theoretically argues that this strategy mitigates representation degeneration and enhances clustering consistency. Extensive experiments on eight standard multi-view benchmark datasets demonstrate significant improvements over state-of-the-art methods: clustering accuracy increases by 5.4% on Caltech6V7 and 5.6% on MSRCv1, indicating strong robustness and generalizability across diverse multi-view clustering scenarios.

📝 Abstract
Multi-view contrastive clustering (MVCC) has gained significant attention for generating consistent clustering structures from multiple views through contrastive learning. However, most existing MVCC methods create cross-views by combining any two views, leading to a high volume of unreliable pairs. Furthermore, these approaches often overlook discrepancies in multi-view representations, resulting in representation degeneration. To address these challenges, we introduce a novel model called Dual-Weighted Contrastive Learning (DWCL) for Multi-View Clustering. Specifically, to reduce the impact of unreliable cross-views, we introduce an innovative Best-Other (B-O) contrastive mechanism that enhances the representation of individual views at a low computational cost. Furthermore, we develop a dual weighting strategy that combines a view quality weight, reflecting the quality of each view, with a view discrepancy weight. This approach effectively mitigates representation degeneration by downplaying cross-views that are both low in quality and high in discrepancy. We theoretically validate the efficiency of the B-O contrastive mechanism and the effectiveness of the dual weighting strategy. Extensive experiments demonstrate that DWCL outperforms previous methods across eight multi-view datasets, showcasing superior performance and robustness in MVCC. Specifically, our method achieves absolute accuracy improvements of 5.4% and 5.6% compared to state-of-the-art methods on the Caltech6V7 and MSRCv1 datasets, respectively.
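The abstract's core idea — contrast each view's embeddings against the best (highest-quality) view only, and weight each cross-view loss term by view quality times a view discrepancy factor — can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the paper's actual implementation: the function names `info_nce` and `dwcl_loss` are hypothetical, the quality weight is assumed to come from some external per-view score (e.g. a clustering quality metric), and mean cosine similarity to the best view is assumed as the discrepancy weight.

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.5):
    """Standard InfoNCE loss between two aligned batches of embeddings.

    Row i of z_a and row i of z_b are treated as a positive pair;
    all other rows serve as negatives.
    """
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature           # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # diagonal = positive pairs

def dwcl_loss(view_embeddings, quality_weights, temperature=0.5):
    """Hypothetical sketch of a Best-Other dual-weighted contrastive loss.

    view_embeddings : list of (N, D) arrays, one per view
    quality_weights : list of floats, higher = better view quality
                      (assumed to come from an external per-view score)
    """
    # "Best" view = the one with the highest quality weight; every
    # other view is contrasted against it (B-O pairs only, not all pairs).
    best = max(range(len(view_embeddings)), key=lambda v: quality_weights[v])
    z_best = view_embeddings[best]
    zb = z_best / np.linalg.norm(z_best, axis=1, keepdims=True)

    total, weight_sum = 0.0, 0.0
    for v, z_v in enumerate(view_embeddings):
        if v == best:
            continue
        zv = z_v / np.linalg.norm(z_v, axis=1, keepdims=True)
        # Discrepancy weight (assumption): mean cosine similarity to the
        # best view, clipped at 0, so highly discrepant views are downplayed.
        sim = max(float(np.mean(np.sum(zv * zb, axis=1))), 0.0)
        w = quality_weights[v] * sim             # dual weight: quality x similarity
        total += w * info_nce(z_v, z_best, temperature)
        weight_sum += w
    return total / (weight_sum + 1e-8)
```

Cross-views that are both low-quality and highly discrepant receive a near-zero weight, which is the mechanism the abstract credits with preventing representation degeneration.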
Problem

Research questions and friction points this paper is trying to address.

Multi-View Clustering
MVCC Reliability
View Discrepancy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-weighted Contrastive Learning
Multi-view Contrastive Clustering
Best-Other Contrast
Zhihui Zhang — Beijing Institute of Technology, Beijing, China
Xiaoshuai Hao — Beijing Academy of Artificial Intelligence (BAAI); vision and language
Hanning Yuan — Beijing Institute of Technology, Beijing, China
Lianhua Chi — La Trobe University, Australia
Qi Guo — Beijing Institute of Technology, Beijing, China
Sijie Ruan — Assistant Professor, Beijing Institute of Technology; spatio-temporal data mining, volunteered geographic information, urban computing
Jinhui Pang — Beijing Institute of Technology, Beijing, China