Paper 'AntiLeakBench: Preventing Data Contamination by Automatically Constructing Benchmarks with Updated Real-World Knowledge' accepted to ACL 2025 and received the Senior Area Chair Award.
Paper 'FASTopic: Pretrained Transformer is a Fast, Adaptive, Stable, and Transferable Topic Model' accepted to NeurIPS 2024.
Paper 'AKEW: Assessing Knowledge Editing in the Wild' accepted to EMNLP 2024.
Paper 'Are LLMs Good Zero-shot Fallacy Classifiers?' accepted to EMNLP 2024.
Multiple papers accepted to ACL 2025, ICML 2025, NAACL 2025, and other venues.
Successfully defended Ph.D. thesis.
Research Experience
Serves as a Research Scientist at the College of Computing and Data Science, Nanyang Technological University.
Background
Research interests lie mainly in natural language processing, especially the efficient reasoning, trustworthiness, and multi-agent collaboration of large language models (LLMs).
Miscellany
Looking for highly self-motivated Ph.D., master's, and undergraduate students to collaborate on a variety of interesting topics related to LLMs.