Zhanghao Hu

Google Scholar ID: trDOsRsAAAAJ
School of Informatics, King's College London
Natural Language Processing, Artificial Intelligence, Multi-modal Processing
Citations & Impact (all-time)
Citations: 59
H-index: 2
i10-index: 2
Publications: 6
Co-authors: 10
Academic Achievements
  • 'Spectrum Projection Score: Aligning Retrieved Summaries with Reader Models in Retrieval-Augmented Generation' accepted at AAAI 2026 (Oral)
  • 'CODI: Compressing Chain-of-Thought into Continuous Space via Self-Distillation' accepted at EMNLP 2025 (Main)
  • 'Human Motion Video Generation: A Survey' accepted at TPAMI 2025
  • 'Beyond Prompting: An Efficient Embedding Framework for Open-Domain Question Answering' accepted at ACL 2025 (Main)
  • 'Causal and Temporal Inference in Visual Question Generation by Utilizing Pre-trained Models' accepted at the ACL 2024 ALVR workshop
  • 'Exploring Effective and Efficient Question-Answer Representations' accepted at COLING 2024
  • 'EEE-QA: Exploring Effective and Efficient Question-Answer Representations' accepted at the AAAI 2024 Deployable AI workshop
Research Experience
  • Interned for four months as a full-stack engineer at 01.AI, specializing in voice cloning algorithms
Education
  • PhD: King's College London, School of Informatics, NLP group; supervised by Dr. Lin Gui and Prof. Yulan He; since October 2024
  • MSc: University of Edinburgh, AI; supervised by Prof. Frank Keller
  • BEng: joint program between North China Electric Power University (NCEPU) and the University of Edinburgh, EEE; supervised by Dr. Jiabin Jia
Background
  • Second-year PhD student in the NLP group, School of Informatics, King's College London
  • Research interests at the intersection of Natural Language Processing, Retrieval-Augmented Generation (RAG), latent representation learning, and multi-modal understanding
  • Long-term vision: building principled AI systems that unify retrieval, reasoning, and multi-modal understanding into trustworthy, efficient, and human-aligned communicative agents
Miscellany
  • Always open to new collaborations and engaging discussions