🤖 AI Summary
Existing anomaly detection methods for text-rich graphs (e.g., academic citation networks) suffer from three key limitations: (1) neglect of structural bias in graph topology, (2) textual information overwhelming discriminative structural cues, and (3) high computational cost of fine-tuning large language models.
Method: We propose a multimodal structural-enhanced language model that jointly leverages an implicit structural modality—constructed via attribute co-occurrence—and a fine-grained textual modality. Our framework introduces a multi-round, task-guided instruction-tuning paradigm to co-optimize semantic representations and topological features. It integrates structural-enhanced language modeling, graph-structure encoding, fine-grained text-attribute modeling, and multi-task-oriented training.
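The "implicit structural modality constructed via attribute co-occurrence" can be pictured as a graph that links papers sharing attribute values (co-authors, venues, keywords). The sketch below is an illustrative reconstruction, not the paper's actual implementation; the field names (`authors`, `venue`, `keywords`) and the shared-attribute edge weighting are assumptions for demonstration.

```python
from collections import defaultdict
from itertools import combinations

def build_cooccurrence_graph(papers):
    """Illustrative sketch: link papers that share an attribute value.
    Edge weight = number of attribute values the two papers have in common."""
    attr_index = defaultdict(set)  # (field, value) -> set of paper ids
    for pid, paper in papers.items():
        for field in ("authors", "venue", "keywords"):
            values = paper.get(field, [])
            if isinstance(values, str):
                values = [values]
            for v in values:
                attr_index[(field, v.lower())].add(pid)

    edges = defaultdict(int)  # (pid_a, pid_b) -> shared-attribute count
    for pids in attr_index.values():
        for a, b in combinations(sorted(pids), 2):
            edges[(a, b)] += 1
    return dict(edges)

# Toy example: p1 and p2 share an author and a venue (weight 2);
# p1 and p3 share only a keyword (weight 1); p2 and p3 share nothing.
papers = {
    "p1": {"authors": ["A. Smith", "B. Chen"], "venue": "KDD", "keywords": ["graphs"]},
    "p2": {"authors": ["B. Chen"], "venue": "KDD", "keywords": ["nlp"]},
    "p3": {"authors": ["C. Liu"], "venue": "WWW", "keywords": ["graphs"]},
}
graph = build_cooccurrence_graph(papers)
```

A paper whose co-occurrence edges to the rest of an author's profile are weak or absent is a natural candidate for an incorrect assignment, which is the discriminative structural cue the textual modality alone tends to miss.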
Contribution/Results: Our method ranked first in KDD Cup 2024, significantly outperforming unimodal semantic-only and graph-only baselines. It enables high-precision automatic identification of author-paper misassignments on the million-scale WhoIsWho benchmark, where over 10% of paper-author assignments were found to require correction.
📝 Abstract
The rapid growth of academic publications has exacerbated the issue of author name ambiguity in online digital libraries. Despite advances in name disambiguation algorithms, cumulative errors continue to undermine the reliability of academic systems. It is estimated that over 10% of paper-author assignments were rectified when constructing the million-scale WhoIsWho benchmark. Existing approaches to detecting incorrect assignments are either semantic-based or graph-based, and fall short of making full use of the rich text attributes of papers and the implicit structural features defined by the co-occurrence of paper attributes. To this end, this paper introduces a structure-enhanced language model that combines key structural features from graph-based methods with fine-grained semantic features from rich paper attributes to detect incorrect assignments. The proposed model is trained with a highly effective multi-modal, multi-turn instruction tuning framework that incorporates task-guided instruction tuning, a text-attribute modality, and a structural modality. Experimental results demonstrate that our model outperforms previous approaches, achieving top performance on the KDD Cup 2024 leaderboard. Our code is publicly available.
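To make the task-guided instruction tuning concrete, one turn of the instruction data might pair a target paper's text attributes with structural evidence from the author's profile and ask for a binary verdict. The prompt layout, field names, and `structural_score` feature below are hypothetical illustrations, not the authors' actual template.

```python
def format_instruction(target_paper, profile_papers, structural_score):
    """Hypothetical single turn of instruction-tuning data: the model sees
    the target paper's attributes, the author's other papers, and a
    precomputed structural co-occurrence score, then labels the assignment."""
    context = "\n".join(
        f"- {p['title']} ({p['venue']})" for p in profile_papers
    )
    return (
        "Task: decide whether the target paper belongs to this author.\n"
        f"Author's other papers:\n{context}\n"
        f"Target paper: {target_paper['title']} ({target_paper['venue']})\n"
        f"Structural co-occurrence score: {structural_score:.2f}\n"
        "Answer with 'correct' or 'incorrect'."
    )

prompt = format_instruction(
    {"title": "Graph Anomaly Detection", "venue": "KDD"},
    [{"title": "Node Embeddings at Scale", "venue": "KDD"}],
    structural_score=0.87,
)
```

Serializing both modalities into the instruction text is what lets a single language model co-optimize semantic and topological features across multiple tuning rounds, instead of fine-tuning separate encoders per modality.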