Publications
Paper: 'Can Large Language Models Improve the Adversarial Robustness of Graph Neural Networks?' (CCF-A), KDD'25.
Paper: 'Rethinking Byzantine Robustness in Federated Recommendation from Sparse Aggregation Perspective' (CCF-A), AAAI’25.
Paper: 'Endowing Pre-trained Graph Models with Provable Fairness' (CCF-A), WWW’24.
Paper: 'Data-centric graph learning: A survey', IEEE TBD’24.
Research Experience
2024.06 - 2025.01, China Telecommunications Corporation, China; Research Intern on the Financial Risk Service; Mentor: Mengmei Zhang.
Education
PhD Student, Beijing University of Posts and Telecommunications (BUPT), supervised by Prof. Chuan Shi.
Background
Research Interests: Trustworthy graph machine learning and large language models. My current research focuses on developing graph foundation models.