Yixiang Qiu

Google Scholar ID: kxotrxgAAAAJ
Tsinghua Shenzhen International Graduate School
Trustworthy AI, Computer Vision, Deep Learning
Citations & Impact (All-time)
  • Citations: 101
  • H-index: 4
  • i10-index: 3
  • Publications: 10
  • Co-authors: 10
Academic Achievements
  • Publications:
    - 'Your Language Model Can Secretly Write Like Humans: Contrastive Paraphrase Attacks on LLM-Generated Text Detectors', EMNLP 2025
    - 'ICAS: Detecting Training Data from Autoregressive Image Generative Models', ACM MM 2025
    - 'Stealthy Shield Defense: A Conditional Mutual Information-Based Approach against Black-Box Model Inversion Attacks', ICLR 2025
    - 'A Closer Look at GAN Priors: Exploiting Intermediate Features for Enhanced Model Inversion Attacks', ECCV 2024
  • Reviewer for: WCSP 2024, ICLR 2025
  • Awards:
    - First-Class Scholarship × 2 and Second-Class Scholarship × 1, Harbin Institute of Technology, Shenzhen, 2023.09
    - Fang Binxing Academician Scholarship, Harbin Institute of Technology, Shenzhen, 2023.03
    - Chinese National Scholarship for Undergraduate Students (Top 1.5%), Harbin Institute of Technology, Shenzhen, 2022.12
    - Provincial First Prize in the China Undergraduate Mathematical Contest in Modeling (CUMCM), 2022.09
    - National Third Prize in the National Student Computer System Capability Challenge (NSCSCC), 2022.08
    - Provincial First Prize in the Chinese Mathematics Competitions (CMC), 2021.12
Research Experience
  • Conducting research in the ITML lab, focusing on the security of multi-modal large language models.
Education
  • Received B.S. in Computer Science and Technology from Harbin Institute of Technology, Shenzhen in 2024; currently a second-year master's student in the ITML lab at Tsinghua Shenzhen International Graduate School, supervised by Prof. Bin Chen and Prof. Shu-Tao Xia.
Background
  • Research interests include Trustworthy AI, Generative Models, Computer Vision, and Deep Learning. Currently working on the security of multi-modal large language models.