Xuanli He
Google Scholar ID: TU8t0iAAAAAJ
UCL
Natural Language Processing · AI Safety · Machine Learning
Citations & Impact (all-time)
  • Citations: 2,170
  • H-index: 23
  • i10-index: 36
  • Publications: 20
  • Co-authors: 9
Resume (English only)
Academic Achievements
  • September 2024 - Paper: “Generative Models are Self-Watermarked: Declaring Model Authentication through Re-Generation” accepted by TMLR 2024.
  • August 2024 - Tutorial: Presented a tutorial titled “A Copyright War: Authentication for Large Language Models” at IJCAI 2024.
  • May 2024 - Papers: 2 papers accepted at ACL 2024 (1 main conference, 1 Findings).
  • April 2024 - Paper: “SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks” accepted by TACL 2024.
  • March 2024 - Papers: 2 papers accepted at NAACL 2024.
  • Other published papers include “Using Natural Language Explanations to Improve Robustness of In-context Learning” and “Mitigating Backdoor Poisoning Attacks through the Lens of Spurious Correlation”, among others.
Research Experience
  • Currently a Research Fellow in the NLP group at University College London.
Education
  • Ph.D. from Monash University (Australia), supervised by Prof. Reza Haffari and Dr. Mohammad Norouzi.
Background
  • Research interests: the intersection of deep learning and natural language processing, with an emphasis on the security and robustness of NLP models.