Publications: MixupExplainer (KDD’23), ProxyExplainer (ICML’24), RegExplainer (NeurIPS’24), and ConfExplainer (KDD’25). Reviewer for KDD, ICML, ICLR, NeurIPS, AISTATS, WACV, and PKDD; AAAI Program Committee member; ACM Professional Member (2025–2026).
Research Experience
Currently a Research Scientist at TikTok, working on vision–language models. Previously a Senior Data Scientist at Walmart Global Tech (recommender systems and knowledge graphs) and an Applied Scientist Intern at Amazon (model retraining under distribution shift).
Education
Ph.D. in Informatics from the New Jersey Institute of Technology (NJIT), May 2025, co-supervised by Prof. Hua Wei (ASU) and Prof. Michael Lee (NJIT); B.S. in Computer Science (Honors Science Program) from Xi’an Jiaotong University, 2020, with study experience at the University of Minnesota Twin Cities and the University of Alberta.
Background
Research interests include trustworthy & explainable AI for Graph Neural Networks (e.g., information bottleneck and confidence-aware explanations), deep learning (LLMs/VLMs) for video moderation, and NLP for code intelligence. Representative works include MixupExplainer (KDD’23), ProxyExplainer (ICML’24), RegExplainer (NeurIPS’24), and ConfExplainer (KDD’25).
Miscellany
Actively welcomes research collaborations across areas including (but not limited to) GNNs and their applications and explainability, LLM/VLM explainability and architecture optimization, training-strategy optimization, video compression, and ML infrastructure. Also open to working with motivated, well-prepared interns.