🤖 AI Summary
Existing multi-view learning models suffer from performance degradation in open-set scenarios, primarily due to an implicit assumption of class completeness and to spurious view-label correlations induced during training, which give rise to static view bias. To address this, we propose O-Mix, a novel framework featuring three key innovations: (1) an O-Mix sample synthesis strategy that generates controllably ambiguous virtual samples to enhance model sensitivity to unknown classes; (2) a fuzziness-aware auxiliary network that explicitly models atypical patterns; and (3) an HSIC-based contrastive debiasing module that disentangles view-specific and view-consistent representations. Evaluated on multiple multi-view open-set benchmarks, O-Mix significantly improves unknown-class identification accuracy while maintaining state-of-the-art closed-set classification performance.
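The summary's O-Mix synthesis of "controllably ambiguous virtual samples" can be illustrated with a minimal mixup-style sketch. This is an assumption about the mechanism, not the paper's exact formulation: here a virtual sample interpolates two inputs from different classes, and the scalar `ambiguity` (a hypothetical target derived from the mixing coefficient) is largest when the mix is closest to 50/50.

```python
import numpy as np

def o_mix(x1, x2, alpha=0.4, rng=None):
    """Hypothetical mixup-style synthesis of an ambiguous virtual sample.

    Interpolates two samples drawn from different classes. A mixing
    coefficient lam ~ Beta(alpha, alpha) near 0.5 yields maximal
    ambiguity; the derived score 2*min(lam, 1-lam) in [0, 1] serves as
    an illustrative calibrated uncertainty target for the virtual sample.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    x_virtual = lam * x1 + (1 - lam) * x2  # convex combination of inputs
    ambiguity = 2 * min(lam, 1 - lam)      # 1 at lam=0.5, 0 at the endpoints
    return x_virtual, ambiguity
```

In such a scheme, the ambiguity score (rather than a hard class label) would supervise the auxiliary network's open-set uncertainty estimate.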
📝 Abstract
Existing multi-view learning models struggle in open-set scenarios due to their implicit assumption of class completeness. Moreover, static view-induced biases, which arise from spurious view-label associations formed during training, further degrade their ability to recognize unknown categories. In this paper, we propose a multi-view open-set learning framework based on ambiguity uncertainty calibration and view-wise debiasing. To simulate ambiguous samples, we design O-Mix, a novel synthesis strategy that generates virtual samples with calibrated open-set ambiguity uncertainty. These samples are further processed by an auxiliary ambiguity perception network that captures atypical patterns for improved open-set adaptation. Furthermore, we incorporate an HSIC-based contrastive debiasing module that enforces independence between view-specific ambiguous representations and view-consistent representations, encouraging the model to learn generalizable features. Extensive experiments on diverse multi-view benchmarks demonstrate that the proposed framework consistently enhances unknown-class recognition while preserving strong closed-set performance.
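The debiasing module's independence constraint rests on the Hilbert-Schmidt Independence Criterion. The abstract does not give the estimator, but a standard choice is the biased empirical HSIC of Gretton et al. (2005), tr(KHLH)/(n-1)^2 with RBF kernels; a minimal sketch, assuming that form and a shared kernel bandwidth:

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    """Gaussian (RBF) kernel matrix K[i,j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T  # pairwise squared distances
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic_biased(X, Y, sigma=1.0):
    """Biased empirical HSIC: (n-1)^-2 * tr(K H L H), H = I - (1/n) 1 1^T.

    Near zero when the two representation batches are independent; larger
    values indicate statistical dependence. Used here as a penalty to push
    view-specific and view-consistent features apart.
    """
    n = X.shape[0]
    K = rbf_kernel(X, sigma)
    L = rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

Minimizing this quantity over batches of view-specific and view-consistent embeddings is one way to realize the independence constraint described above; the paper's contrastive formulation may differ in detail.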