Publications
Published several papers in the field of NLP, including 'Atomic Reasoning for Scientific Table Claim Verification', 'The Law of Knowledge Overshadowing: Towards Understanding, Predicting, and Preventing LLM Hallucination', and more. For a full list, see his Google Scholar profile.
Research Experience
Works as a postdoctoral researcher at UIUC, focusing on NLP, the theoretical interpretation of LLMs, and trustworthiness. Has organized and participated in various workshops, talks, and tutorials at conferences such as ACL 2025 and AAAI 2025.
Education
Postdoc, University of Illinois at Urbana-Champaign (UIUC), supervised by Prof. Heng Ji and Prof. Chengxiang Zhai
Background
Research interests include Natural Language Processing (NLP), theoretical interpretation of large language models (LLMs), and trustworthy LLMs. Focuses on understanding the knowledge mechanisms of LLMs, such as how they acquire, store, represent, and utilize knowledge, and leveraging these insights to enhance model reliability and performance. Specifically interested in:
1. Interpreting, predicting, and preventing hallucination through the lens of knowledge interaction;
2. Updating knowledge while preserving model robustness and reliability;
3. Improving knowledge acquisition mechanisms to boost model intelligence.
Miscellany
Invited to give talks at institutions including the Chinese Academy of Sciences and the University of Texas at Austin, covering the impact of knowledge overshadowing on large language models and potential solutions.