Lei Hsiung
Google Scholar ID: CJaZ2NcAAAAJ
Dartmouth College
AI Safety · Trustworthy Machine Learning
Citations & Impact (all-time)
  • Citations: 116
  • H-index: 5
  • i10-index: 3
  • Publications: 12
  • Co-authors: 8
Resume (English only)
Academic Achievements
  • Paper 'NeuralFuse' accepted to NeurIPS 2024
  • Paper 'AutoVP' accepted to ICLR 2024 (acceptance rate: 30.98%)
  • Paper 'Data Debiasing via Model-free Data Pruning' accepted to ICLR 2024 Workshop on Navigating and Addressing Data Problems for Foundation Models
  • Recipient of ICLR 2024 Scholar Award
  • Paper 'Spectral Insights into Data-Oblivious Critical Layers in Large Language Models' accepted to Findings of ACL 2025
  • Paper 'Why LLM Safety Guardrails Collapse After Fine-tuning' currently under submission
  • Selected for OpenAI Researcher Access Program (June 2025)
  • Served as a reviewer for ICLR 2024, NeurIPS 2024/2025, AAAI 2025, and ICML 2025
Background
  • Second-year Ph.D. student in Computer Science at Dartmouth College
  • Research lies at the intersection of trustworthy and efficient machine learning, with a focus on building safer and more reliable models
  • Research topics include safety alignment, adversarial robustness, model reprogramming, and energy-efficient inference
  • Aims to advance the safety and reliability of ML systems as a foundation for robust and secure artificial general intelligence