News
- Presented work on uncovering where multi-task learning happens in instruction-tuned LLMs at EMNLP 2024
- Presented papers on temporal grounding and context understanding in LLMs at NAACL 2024
- Presented two papers on multilingual summarization and multilingual representation analysis at EMNLP 2023
- Published a paper titled 'Verifying Chain-of-Thought Reasoning via Its Computational Graph' on arXiv, proposing Circuit-based Reasoning Verification (CRV) to detect and correct errors in chain-of-thought reasoning
Research Experience
- Meta FAIR, Research Scientist Intern, May 2025 - Oct. 2025
- Goldman Sachs, Summer Analyst, Jun. 2018 - Aug. 2018
Education
- University of Edinburgh, PhD in Natural Language Processing, Sep. 2020 - Present, Supervisors: Shay Cohen, Bonnie Webber
- University of Edinburgh, Master's by Research in Natural Language Processing, Sep. 2019 - Aug. 2020
- University of Edinburgh, BEng in Artificial Intelligence and Software Engineering, Sep. 2015 - Jul. 2019
Background
- A final-year PhD candidate in Natural Language Processing at the UKRI Centre for Doctoral Training, University of Edinburgh
- Research interests: Uncovering and understanding the inner workings of large language models (LLMs), including domain learning, multilingual representation analysis, instruction tuning, and multi-task learning
- Affiliated with the Institute for Language, Cognition and Computation (ILCC) in the School of Informatics, and a member of The Cohort and EdinburghNLP research groups
Miscellany
- Contact: zheng.zhao@ed.ac.uk
- Social media/academic profiles: Google Scholar, Semantic Scholar, GitHub, Twitter, LinkedIn, ORCID