Co-first-authored paper “Do BERT-Like Bidirectional Models Still Perform Better on Text Classification in the Era of LLMs?” accepted to Findings of EMNLP '25
Collaborative work “MSV-PCT” accepted to AAAI '25
Paper “SDformer” accepted to IJCAI '24
Representative undergraduate work “Generic Attention-model Explainability by Weighted Relevance Accumulation” published at MMAsia '23
Accepted into the Red Bird Master of Philosophy Program at HKUST(GZ) in May 2024
Background
Research interests focus on Trustworthy AI
Long-term goal is to analyze and control state-of-the-art AI algorithms, models, and systems
Specifically, exploring the interpretability, robustness, granularity, and generalizability of AI to address explainability (XAI), safety, privacy, fairness, fine-grained controllability, and human-AI interaction
Currently working on mechanistic interpretability for safe AI and societal AI
Actively seeking Ph.D. positions for Fall 2026, Spring 2027, or Fall 2027