AI Summary
Existing methods inadequately model the intrinsic heterogeneity of object patterns across semantic spaces in heterogeneous information fusion. Method: This paper proposes a collaborative learning framework based on a unified heterogeneous multiplex network, incorporating domain-specific encoders to capture the structural and semantic characteristics of distinct semantic spaces, and introducing a large-margin collaboration mechanism that enforces prediction boundary constraints among expert models to enhance complementarity and robustness. Theoretical analysis establishes the optimization feasibility and convergence stability of the framework. Contribution/Results: Extensive experiments on multiple benchmark datasets demonstrate significant improvements over state-of-the-art methods. The source code is publicly released, and ablation studies and cross-dataset evaluations confirm the framework's generalizability and practical applicability.
Abstract
Fusing heterogeneous information remains a persistent challenge in modern data analysis. While significant progress has been made, existing approaches often fail to account for the inherent heterogeneity of object patterns across different semantic spaces. To address this limitation, we propose the Cooperation of Experts (CoE) framework, which encodes multi-typed information into unified heterogeneous multiplex networks. By overcoming modality and connection differences, CoE provides a powerful and flexible model for capturing the intricate structures of real-world complex data. In our framework, dedicated encoders act as domain-specific experts, each specializing in learning distinct relational patterns in specific semantic spaces. To enhance robustness and extract complementary knowledge, these experts collaborate through a novel large margin mechanism supported by a tailored optimization strategy. Rigorous theoretical analyses guarantee the framework's feasibility and stability, while extensive experiments across diverse benchmarks demonstrate its superior performance and broad applicability. Our code is available at https://github.com/strangeAlan/CoE.
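To make the collaboration idea concrete, below is a minimal, hypothetical sketch (not the authors' released code) of how domain-specific experts might cooperate under a large-margin constraint: each expert scores the classes for an object, a hinge-style penalty measures how far the true-class score falls short of beating the best competing class by a margin, and the fused prediction sums the experts' scores. All function names, the margin value, and the fusion-by-summation rule here are illustrative assumptions.

```python
# Hypothetical sketch of expert cooperation with a large-margin penalty.
# Not the CoE implementation; names and choices are assumptions.

def margin_violation(scores, label, margin=1.0):
    """Hinge penalty: positive when the true-class score does not
    exceed the best competing class score by at least `margin`."""
    best_other = max(s for i, s in enumerate(scores) if i != label)
    return max(0.0, margin - (scores[label] - best_other))

def cooperate(expert_scores, label, margin=1.0):
    """Average per-expert margin penalties (a collaboration loss)
    and fuse predictions by summing expert scores class-wise."""
    loss = sum(margin_violation(s, label, margin)
               for s in expert_scores) / len(expert_scores)
    fused = [sum(col) for col in zip(*expert_scores)]  # class-wise sum
    prediction = max(range(len(fused)), key=fused.__getitem__)
    return loss, prediction

# Two experts scoring 3 classes for one object; true label is 2.
experts = [[0.1, 0.3, 1.6],   # confident, satisfies the margin
           [0.2, 1.4, 0.9]]   # violates the margin (penalized)
loss, pred = cooperate(experts, label=2)
# loss = 0.75, pred = 2: the confident expert carries the fusion,
# while the loss pushes the weaker expert toward the margin.
```

In a full system each expert would be a trained encoder over one semantic space of the multiplex network, and the margin term would be one component of the joint training objective rather than a standalone loss.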