Xiaotian Ye
Scholar

Google Scholar ID: F0tij1oAAAAJ
Beijing University of Posts and Telecommunications
Natural Language Processing · Knowledge Representation · Large Language Models
Citations & Impact
All-time
  • Citations: 53
  • H-index: 3
  • i10-index: 2
  • Publications: 6
  • Co-authors: 0
Academic Achievements
  • Paper 'LLM Unlearning Should Be Form-Independent' accepted to IEEE Symposium on Security and Privacy (S&P) 2026
  • Paper 'Uncovering Overfitting in Large Language Model Editing' accepted as Spotlight at ICLR 2025
  • Two papers accepted to EMNLP 2025
  • Paper 'Knowledge Graph Enhanced Large Language Model Editing' accepted to EMNLP 2024 Main Conference
  • Recipient of CCF Elite Collegiate Award (2025)
  • Silver medalist at ICPC Asia Regional Contest (2023)
Background
  • Senior undergraduate student in Computer Science at Beijing University of Posts and Telecommunications (BUPT)
  • Research intern at NLPR/MAIS, Institute of Automation, Chinese Academy of Sciences (CASIA)
  • Research focuses on the intersection of knowledge mechanisms and the safety & trustworthiness of foundation models (LLMs/VLMs)
  • Broadly interested in interpretability, knowledge editing, unlearning, alignment, and agentic safety
  • Passionate about developing reliable and trustworthy AI systems for real-world applications