🤖 AI Summary
User intents behind implicit feedback are ambiguous and lack explicit supervision, which limits both recommendation accuracy and interpretability. To address this, the paper proposes DMICF, a Dual-View Intent-Decoupled Collaborative Filtering framework. DMICF jointly models the user-item interaction graph via dual-view graph neural encoding and employs a sub-intent local alignment mechanism to achieve fine-grained intent disentanglement. It introduces an intent-aware scoring function and improves robustness and interpretability in long-tail scenarios through multi-negative sampling contrastive learning and softmax-based semantic alignment. Extensive experiments on multiple public benchmarks show that DMICF significantly outperforms state-of-the-art methods in Recall@K and NDCG@K, especially under data sparsity and long-tail distributions, while delivering transparent, interpretable intent representations.
📝 Abstract
Disentangling user intentions from implicit feedback has become a promising strategy to enhance recommendation accuracy and interpretability. Prior methods often model intentions independently and lack explicit supervision, thus failing to capture the joint semantics that drive user-item interactions. To address these limitations, we propose DMICF, a unified framework that explicitly models interaction-level intent alignment while leveraging structural signals from both user and item perspectives. DMICF adopts a dual-view architecture that jointly encodes user-item interaction graphs from both sides, enabling bidirectional information fusion. This design enhances robustness under data sparsity by allowing the structural redundancy of one view to compensate for the limitations of the other. To model fine-grained user-item compatibility, DMICF introduces an intent interaction encoder that performs sub-intent alignment within each view, uncovering shared semantic structures that underlie user decisions. This localized alignment enables adaptive refinement of intent embeddings based on interaction context, thus improving the model's generalization and expressiveness, particularly in long-tail scenarios. Furthermore, DMICF integrates an intent-aware scoring mechanism that aggregates compatibility signals from matched intent pairs across user and item subspaces, enabling personalized prediction grounded in semantic congruence rather than entangled representations. To facilitate semantic disentanglement, we design a discriminative training signal via multi-negative sampling and softmax normalization, which pulls together semantically aligned intent pairs while pushing apart irrelevant or noisy ones. Extensive experiments demonstrate that DMICF consistently delivers robust performance across datasets with diverse interaction distributions.
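The abstract's last two mechanisms can be made concrete with a minimal sketch. The snippet below is illustrative only, not the paper's actual implementation: it assumes each user and item is represented by K sub-intent embeddings, scores a user-item pair by aggregating the compatibility of matched intent pairs, and trains with a softmax-normalized loss over one positive and several sampled negatives (an InfoNCE-style objective). The function names and the dot-product compatibility are hypothetical choices for illustration.

```python
import numpy as np

def intent_score(user_intents, item_intents):
    # user_intents, item_intents: (K, d) arrays of K sub-intent embeddings.
    # Hypothetical aggregation: sum of per-intent-pair dot products, i.e.
    # compatibility signals from matched user/item intent subspaces.
    return float(np.sum(user_intents * item_intents))

def multi_negative_loss(user_intents, pos_item, neg_items):
    # Softmax normalization over one positive and N sampled negatives:
    # pulls the semantically aligned (positive) pair together while
    # pushing apart the negative pairs.
    scores = np.array(
        [intent_score(user_intents, pos_item)]
        + [intent_score(user_intents, v) for v in neg_items]
    )
    scores -= scores.max()  # numerical stability before exponentiation
    probs = np.exp(scores) / np.exp(scores).sum()
    return -np.log(probs[0])  # positive pair sits at index 0
```

A well-aligned positive (intent embeddings close to the user's) yields a near-zero loss, while a misaligned positive among the same negatives yields a larger one, which is the discriminative signal the abstract describes.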