Publications
Model Editing as a Double-Edged Sword: Steering Agent Ethical Behavior Toward Beneficence or Harm (arXiv preprint, 2025)
Can Knowledge Editing Really Correct Hallucinations? (ICLR 2025)
SST: Multi-Scale Hybrid Mamba-Transformer Experts for Long-Short Range Time Series Forecasting (CIKM 2025)
Who's Your Judge? On the Detectability of LLM-Generated Judgments (arXiv preprint, 2025)
Privacy-Aware Decoding: Mitigating Privacy Leakage of Large Language Models in Retrieval-Augmented Generation (arXiv preprint, 2025)
Can Editing LLMs Inject Harm? (arXiv preprint, 2024)
Authorship Attribution in the Era of LLMs: Problems, Methodologies, and Challenges (ACM SIGKDD Explorations, 2024)
Can Large Language Models Identify Authorship? (EMNLP 2024 Findings)
TAP: A Comprehensive Data Repository for Traffic Accident Prediction in Road Networks (ACM SIGSPATIAL, 2023)
Research Experience
During his PhD studies at Emory University, his research has focused on model editing and authorship attribution.
Education
PhD student in Computer Science at Emory University, advised by Dr. Kai Shu.
Background
His research interests include improving the factuality, safety, and robustness of foundation models, particularly through model editing. He previously worked on authorship attribution, which aims to identify a text's author from their distinctive writing style.
Miscellany
In his free time, he enjoys spending time in nature, staying active through outdoor sports such as running and swimming, and has recently taken up weightlifting. He also finds joy in playing the piano and expanding his reading list.