Jiacheng Liang

Google Scholar ID: Qsp7ts0AAAAJ
Stony Brook University
LLM Security · LLM Optimization
Citations & Impact
All-time
  • Citations: 148
  • H-index: 7
  • i10-index: 6
  • Publications: 17
  • Co-authors: 11
Resume (English only)
Academic Achievements
  • Published multiple papers, including 'GraphRAG under Fire' (IEEE S&P'26), 'AutoRAN: Weak-to-Strong Jailbreaking of Large Reasoning Models' (EMNLP'25), 'WaterPark: A Robustness Assessment of Language Model Watermarking' (EMNLP'25), 'RobustKV: Defending Large Language Models against Jailbreak Attacks via KV Eviction' (ICLR'25), 'Data to Defense: The Role of Curation in Customizing LLMs Against Jailbreaking Attacks' (EMNLP'25), and 'Model Extraction Attacks Revisited' (Asia CCS'24).
  • Served as a program committee member or reviewer for several international conferences and technical journals.
Research Experience
  • Applied Scientist Intern at Amazon AGI Foundation - Responsible AI, Boston, MA, May 2025 to October 2025. Advisors: Dr. Charith Peris, Dr. Yao Ma.
Education
  • Ph.D. candidate at Stony Brook University, ALPS Lab, supervised by Dr. Ting Wang.
Background
  • Research interests include ensuring the safety and trustworthiness of large language models (LLMs), identifying security challenges, and developing defensive strategies to protect these models from adversarial threats. Specific work investigates vulnerabilities in LLM watermarking, GraphRAG, and jailbreaking of reasoning models, and proposes advanced methods to address these weaknesses. Additionally, has extensive expertise and a strong interest in post-training, prompt engineering, inference optimization, LLM agents, and LLM alignment.
Miscellany
  • Website powered by Jekyll and Minimal Light theme.