Uncertainty Quantification for Incomplete Multi-View Data Using Divergence Measures

📅 2025-07-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address inaccurate uncertainty quantification in incomplete multi-view data caused by noise and inter-modal discrepancies, this paper proposes KPHD-Net. The method replaces the KL divergence with the Hölder divergence for more robust measurement of categorical probability distribution discrepancies; models uncertainty via a variational Dirichlet distribution; and integrates Dempster–Shafer evidence theory with Kalman filtering to enable dynamic, robust fusion of multi-view information and future state prediction. Theoretical analysis guarantees consistency and convergence of the uncertainty estimation. Extensive experiments on multi-view classification and clustering tasks demonstrate that KPHD-Net significantly improves accuracy, noise robustness, and predictive reliability, comprehensively outperforming existing state-of-the-art approaches.
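The Kalman-filter component mentioned in the summary can be illustrated with a minimal scalar predict/update cycle. This is a generic textbook sketch of the recursion, not KPHD-Net's actual state model; the symbols (F, H, Q, R) are the standard Kalman notation, assumed here for illustration only.

```python
def kalman_step(x, P, z, F=1.0, H=1.0, Q=1e-3, R=1e-1):
    """One predict/update cycle of a scalar Kalman filter.

    x, P : prior state estimate and its variance
    z    : new observation
    F, H : state-transition and observation models
    Q, R : process and observation noise variances
    """
    # Predict: propagate the state and inflate its variance
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: blend the prediction with the observation via the Kalman gain
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new
```

Each call moves the estimate toward the new measurement while shrinking its variance, which is the "future state estimation" role the summary attributes to the filter in the fusion pipeline.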

📝 Abstract
Existing multi-view classification and clustering methods typically improve task accuracy by leveraging and fusing information from different views. However, ensuring the reliability of multi-view integration and final decisions is crucial, particularly when dealing with noisy or corrupted data. Current methods often rely on Kullback-Leibler (KL) divergence to estimate the uncertainty of network predictions, ignoring domain gaps between different modalities. To address this issue, KPHD-Net, based on Hölder divergence, is proposed for multi-view classification and clustering tasks. Specifically, KPHD-Net employs a variational Dirichlet distribution to represent class probability distributions, models evidence from different views, and integrates it with Dempster-Shafer evidence theory (DST) to improve uncertainty estimation. Our theoretical analysis demonstrates that proper Hölder divergence offers a more effective measure of distribution discrepancies, ensuring enhanced performance in multi-view learning. Moreover, Dempster-Shafer evidence theory, recognized for its superior performance in multi-view fusion tasks, is combined with the Kalman filter to provide future state estimations, further enhancing the reliability of the final fusion results. Extensive experiments show that the proposed KPHD-Net outperforms current state-of-the-art methods in both classification and clustering tasks in terms of accuracy, robustness, and reliability, with theoretical guarantees.
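The abstract's central substitution, Hölder divergence in place of KL divergence, can be sketched for discrete categorical distributions. The code below follows the standard definition of the proper Hölder divergence for conjugate exponents (1/α + 1/β = 1); the function names and the choice α = 2 are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence between two categorical distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def holder_divergence(p, q, alpha=2.0):
    """Proper Hölder divergence for conjugate exponents alpha, beta
    with 1/alpha + 1/beta = 1. Non-negative by Hölder's inequality,
    and zero exactly when p^alpha is proportional to q^beta."""
    beta = alpha / (alpha - 1.0)
    p, q = np.asarray(p, float), np.asarray(q, float)
    inner = np.sum(p * q)
    norms = np.sum(p ** alpha) ** (1.0 / alpha) * np.sum(q ** beta) ** (1.0 / beta)
    return float(np.log(norms / inner))
```

Unlike KL divergence, the Hölder divergence involves the norms of both distributions symmetrically in its numerator, which is the kind of structural difference the paper argues makes it a more robust discrepancy measure across modalities.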
Problem

Research questions and friction points this paper is trying to address.

Quantify uncertainty in incomplete multi-view data
Improve reliability of multi-view integration with noisy data
Address domain gaps in multi-view uncertainty estimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Hölder divergence for uncertainty quantification
Integrates Dempster-Shafer theory with Kalman filter
Employs variational Dirichlet for class distributions
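The second and third bullets can be sketched together. The snippet below follows the common evidential deep-learning formulation that DST-based multi-view methods build on: per-view evidence defines a Dirichlet distribution (α_k = e_k + 1), which yields belief masses and an uncertainty mass, and two views are fused with the reduced Dempster combination rule. This is a generic sketch of that family of methods, assumed for illustration; KPHD-Net's exact formulation may differ.

```python
import numpy as np

def evidence_to_opinion(evidence):
    """Map non-negative evidence e_k to subjective-logic belief masses
    b_k = e_k / S and uncertainty u = K / S, where S = sum_k (e_k + 1)
    is the Dirichlet strength (alpha_k = e_k + 1)."""
    e = np.asarray(evidence, float)
    K = e.size
    S = np.sum(e + 1.0)
    return e / S, K / S

def ds_combine(b1, u1, b2, u2):
    """Reduced Dempster combination of two view opinions: the
    conflicting mass C is discarded and the remainder renormalized."""
    outer = np.outer(b1, b2)
    C = np.sum(outer) - np.sum(np.diag(outer))  # cross-class conflict
    b = (b1 * b2 + b1 * u2 + b2 * u1) / (1.0 - C)
    u = (u1 * u2) / (1.0 - C)
    return b, u
```

When two views agree, the combined uncertainty mass drops below either individual view's, which is the mechanism that makes the fused decision more reliable than any single modality.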
Authors

Zhipeng Xue — Postgraduate at ShanghaiTech University (compressed sensing, signal processing, machine learning)
Yan Zhang — MSU-BIT-SMBU Joint Research Center of Applied Mathematics, Shenzhen MSU-BIT University, Shenzhen, 518172, China
Ming Li — Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen, 518083, China
Chun Li — MD Anderson Cancer Center (diagnostic imaging, drug delivery, nanotechnology)
Yue Liu — School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
Fei Yu — Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen, 518083, China