🤖 AI Summary
This work addresses the pervasive issue of citation misattribution in academic writing, where cited sources often fail to support—or even contradict—the claims they are meant to substantiate. Existing approaches struggle to capture fine-grained semantic relationships between citation contexts and scholarly network structures. To overcome this limitation, we propose LAGMiD, a novel framework that synergistically integrates the deep semantic reasoning capabilities of large language models (LLMs) with the structural modeling power of graph neural networks (GNNs). LAGMiD introduces an evidence-chain reasoning mechanism for multi-hop citation tracing and leverages knowledge distillation to transfer intermediate reasoning states from the LLM to the GNN. Coupled with a collaborative learning strategy, our method achieves state-of-the-art performance in citation misattribution detection across three real-world academic datasets while significantly reducing inference costs.
📝 Abstract
The scholarly web is a vast network of knowledge connected by citations. However, this system is increasingly compromised by miscitation, where references do not support, or even contradict, the claims they are cited for. Current miscitation detection methods, which rely primarily on semantic similarity or network anomalies, struggle to capture the nuanced relationship between a citation's context and its place in the wider network. While large language models (LLMs) offer powerful semantic reasoning capabilities for this task, their deployment is hindered by hallucination risks and high computational costs. In this work, we introduce the LLM-Augmented Graph Learning-based Miscitation Detector (LAGMiD), a novel framework that leverages LLMs for deep semantic reasoning over citation graphs and distills this knowledge into graph neural networks (GNNs) for efficient and scalable miscitation detection. Specifically, LAGMiD introduces an evidence-chain reasoning mechanism that uses chain-of-thought prompting to perform multi-hop citation tracing and assess semantic fidelity. To reduce LLM inference costs, we design a knowledge distillation method that aligns GNN embeddings with intermediate LLM reasoning states. A collaborative learning strategy further routes complex cases to the LLM while optimizing the GNN for structure-based generalization. Experiments on three real-world benchmarks show that LAGMiD achieves state-of-the-art miscitation detection at significantly reduced inference cost.
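The abstract does not specify the form of the distillation objective, but aligning student embeddings with frozen teacher states is commonly cast as a regression loss over a learned projection. The minimal sketch below illustrates that idea with synthetic data: all dimensions, the linear projection, and the plain MSE objective are illustrative assumptions, not LAGMiD's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: GNN embedding size d_g, LLM hidden size d_l,
# and n citation instances (all made up for illustration).
d_g, d_l, n = 16, 32, 8

gnn_emb = rng.normal(size=(n, d_g))     # student (GNN) embeddings
llm_states = rng.normal(size=(n, d_l))  # frozen teacher (LLM) reasoning states

# Learnable projection mapping the GNN space into the LLM state space.
W = rng.normal(scale=0.1, size=(d_g, d_l))

def distill_loss(W):
    """MSE between projected GNN embeddings and LLM reasoning states."""
    return np.mean((gnn_emb @ W - llm_states) ** 2)

# One gradient-descent step on W, using the closed-form MSE gradient:
# dL/dW = (2 / (n * d_l)) * X^T (X W - Y).
grad = 2.0 / (n * d_l) * gnn_emb.T @ (gnn_emb @ W - llm_states)
W_new = W - 0.1 * grad

assert distill_loss(W_new) < distill_loss(W)  # the alignment improves
```

In practice the projection and GNN would be trained jointly with the task loss, and the teacher states would come from intermediate LLM layers rather than random vectors; this sketch only shows the alignment step in isolation.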