Tiansheng Huang
Google Scholar ID: zz6Oq8wAAAAJ
Georgia Institute of Technology
Parallel and Distributed Computing · Distributed Machine Learning · LLM Safety
Citations & Impact (all-time)
Citations: 1,465
H-index: 20
i10-index: 27
Publications: 20
Co-authors: 16
Resume (English only)
Academic Achievements
  • Received the 2025 Google PhD Fellowship for a research proposal on harmful fine-tuning. Published multiple papers at top international conferences, including ICCV 2025, ICML 2025 (Oral), ICLR 2025 (Oral), NeurIPS 2024, EMNLP 2024, ECCV 2024, PET 2024, CVPR 2024, WWW 2024, WACV 2024, NeurIPS 2023, TPS 2023, WWW 2023 (short paper), and ICLR 2023.
Research Experience
  • Current research focuses on harmful fine-tuning attacks and defenses for LLMs. Relevant papers: attack (Virus); alignment-stage defenses (Vaccine, NeurIPS 2024; T-Vaccine, IEEE TIFS; Booster, ICLR 2025 Oral; CTRAP); fine-tuning-stage defense (Lisa, NeurIPS 2024); post-fine-tuning-stage defenses (Antidote, ICML 2025; Panacea, NeurIPS 2025).
Education
  • Currently a fourth-year CS PhD candidate at the Georgia Institute of Technology, Atlanta, USA, advised by Prof. Ling Liu. Previously received B.E. and master's degrees from South China University of Technology, Guangzhou, China, advised by Prof. Weiwei Lin.
Background
  • Research interests include distributed machine learning, parallel and distributed computing, optimization algorithms, and LLM security and safety alignment. The current focus is enhancing large language model (LLM) safety, paving a critical path toward artificial general intelligence (AGI).
Miscellany
  • Loves public speaking. Feel free to reach out for a talk or discussion on harmful fine-tuning attacks and defenses.