Paper 'Can Editing LLMs Inject Harm?' accepted at AAAI 2026; Received the McCormick School of Engineering Fellowship from Northwestern University and the Patronus AI Ph.D. Research Fellowship; Invited to serve as a Session Chair for ACL 2025 and KDD 2025; Recognized as an Outstanding Reviewer at KDD 2025; Served as an Area Chair for the NeurIPS 2025 Position Paper Track; Published 'Explainable differential diagnosis with dual-inference large language models' in npj Health Systems.
Research Experience
Graduate visiting researcher at UC Berkeley; Organizer of the ResponsibleFM community; Initiator and leader of the LLMs Meet Misinformation initiative.
Education
B.S. from the University of Chinese Academy of Sciences (UCAS); Currently a Ph.D. student in Computer Science at Northwestern University, advised by Prof. Manling Li; Graduate visiting researcher at the University of California, Berkeley, in summer 2025, hosted by Prof. Dawn Song.
Background
Research Interests: Foundation Agents, Trustworthiness, and Multimodality. Background: Dedicated to advancing socially responsible and trustworthy foundation models (language and multimodal). Started and led the LLMs Meet Misinformation initiative.
Miscellany
WeChat ID: alexccychen; Happy to chat about potential collaborations or to give research talks at related seminars.