HetGCoT-Rec: Heterogeneous Graph-Enhanced Chain-of-Thought LLM Reasoning for Journal Recommendation

📅 2025-01-02
📈 Citations: 0 · Influential: 0

🤖 AI Summary
This work addresses the academic journal recommendation task by proposing a novel method that jointly models heterogeneous graph structures and leverages large language models (LLMs) for interpretable reasoning. To overcome the dual bottlenecks of accuracy and interpretability in existing recommender systems, the authors design a structure-aware mechanism that translates subgraph information into natural-language context and develop a graph-enhanced, multi-step chain-of-thought (CoT) reasoning framework, enabling deep synergy between graph neural networks (GNNs) and LLMs. The approach is built on a heterogeneous graph Transformer and meta-path modeling, and is compatible with diverse LLM architectures. Evaluated on an OpenAlex dataset, it achieves a Hit Rate of 96.48% and an H@1 accuracy of 92.21%, significantly outperforming state-of-the-art baselines. Moreover, it generates high-quality, structurally traceable natural-language explanations, bridging structural reasoning with human-understandable justifications.

📝 Abstract
Academic journal recommendation requires effectively combining structural understanding of scholarly networks with interpretable recommendations. While graph neural networks (GNNs) and large language models (LLMs) excel in their respective domains, current approaches often fail to achieve true integration at the reasoning level. We propose HetGCoT-Rec, a framework that deeply integrates a heterogeneous graph transformer with LLMs through chain-of-thought reasoning. Our framework features two key technical innovations: (1) a structure-aware mechanism that transforms subgraph information learned by the heterogeneous graph neural network into natural-language contexts, utilizing predefined metapaths to capture academic relationships, and (2) a multi-step reasoning strategy that systematically embeds graph-derived contexts into the LLM's stage-wise reasoning process. Experiments on a dataset collected from OpenAlex demonstrate that our approach significantly outperforms baseline methods, achieving a 96.48% Hit rate and 92.21% H@1 accuracy. Furthermore, we validate the framework's adaptability across different LLM architectures, showing consistent improvements in both recommendation accuracy and explanation quality. Our work demonstrates an effective approach for combining graph-structured reasoning with language models for interpretable academic venue recommendations.
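The abstract's first innovation, verbalizing metapath-captured subgraph structure into prompt context for stage-wise CoT reasoning, can be illustrated with a minimal sketch. The function names, metapath templates, and prompt wording below are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch: turning metapath instances from an academic
# heterogeneous graph into natural-language context for an LLM prompt.
# Templates and function names are assumptions for illustration only.

METAPATH_TEMPLATES = {
    # metapath name -> sentence template over named nodes along the path
    "P-A-P": "This paper shares author {1} with '{2}', published in {3}.",
    "P-V": "Candidate venue: {1}.",
}

def verbalize_metapath(name, nodes):
    """Render one metapath instance as a sentence via its template."""
    template = METAPATH_TEMPLATES[name]
    # Prepend a dummy element so {1} maps to the first node name.
    return template.format(*([""] + list(nodes)))

def build_cot_prompt(title, metapath_instances):
    """Embed graph-derived context into a stage-wise reasoning prompt."""
    context = "\n".join(
        verbalize_metapath(name, nodes) for name, nodes in metapath_instances
    )
    return (
        f"Paper: {title}\n"
        f"Graph context:\n{context}\n"
        "Step 1: Summarize the paper's research area from the context.\n"
        "Step 2: Compare candidate venues against related papers' venues.\n"
        "Step 3: Recommend the best-fitting journal and explain why."
    )

prompt = build_cot_prompt(
    "Graph-based journal recommendation",
    [
        ("P-A-P", ("A. Smith", "Citation networks at scale", "Scientometrics")),
        ("P-V", ("Scientometrics",)),
    ],
)
print(prompt)
```

In the actual framework the subgraph information comes from a trained heterogeneous graph Transformer rather than hand-written templates; the sketch only shows the shape of the graph-to-text translation step.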
Problem

Research questions and friction points this paper is trying to address.

Academic Journal Recommendation
Graph Neural Networks
Explainable Recommendation Methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

HetGCoT-Rec
Heterogeneous Graph Neural Networks
Large Language Model Integration
👥 Authors
Runsong Jia
University of Technology Sydney, Sydney, Australia
Mengjia Wu
University of Technology Sydney · Bibliometrics · Text mining · Network analytics
Ying Ding
Bill & Lewis Suit Professor, School of Information, Dell Med, University of Texas at Austin · AI in Health · Knowledge Graph · Science of Science
Jie Lu
University of Technology Sydney, Sydney, Australia
Yi Zhang
University of Technology Sydney, Sydney, Australia