Hengrui Jia

Google Scholar ID: g2vBgnoAAAAJ
University of Toronto and Vector Institute
Deep Learning · Adversarial Machine Learning
Citations & Impact (all-time)
  • Citations: 2,275
  • H-index: 9
  • i10-index: 8
  • Publications: 12
  • Co-authors: 11
Academic Achievements
  • Publications:
  • - Backdoor Detection through Replicated Execution of Outsourced Training
  • - LLM Dataset Inference: Did you train on my dataset?
  • - Gradients Look Alike: Sensitivity is Often Overestimated in DP-SGD
  • - Proof-of-Learning is Currently More Broken Than You Think
  • - On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning
  • - A Zest of LIME: Towards Architecture-Independent Model Distances
  • - SoK: Machine Learning Governance
  • - Proof-of-Learning: Definitions and Practice
  • - Entangled Watermarks as a Defense against Model Extraction
  • - Machine Unlearning
  • Awards:
  • - Ontario Graduate Scholarship (2024-2025)
  • - Ontario Graduate Scholarship (2023-2024)
  • - Mary H. Beatty Fellowship (2022-2023)
  • - Vector Scholarship in Artificial Intelligence (2020-2021)
  • - Dean’s List (2016-2020)
  • Invited Talks:
  • - Ownership Resolution in ML, Purdue University (2024)
  • - Ownership Resolution in ML, Northwestern University (2024)
  • - Ownership Resolution in ML, University of Wisconsin–Madison (2024)
  • - Ownership of ML Models, Mila - Quebec AI Institute (2024)
  • - Entangled Watermarks as a Defense against Model Extraction, DeepMind (2023)
  • - A Zest of LIME: Towards Architecture-Independent Model Distances, Workshop on Algorithmic Audits of Algorithms (2023)
  • - Entangled Watermarks as a Defense against Model Extraction, Intel (2022)
  • - Machine Unlearning, Vector Institute (2021)
Research Experience
  • Conducting research at the CleverHans Lab.
Education
  • PhD student at the University of Toronto and Vector Institute, advised by Prof. Nicolas Papernot.
Background
  • Research interests: trustworthy machine learning, at the intersection of security and machine learning. Particularly interested in questions such as: what risks accompany the benefits of machine learning, who is responsible for those risks, and how can we mitigate them?
Miscellany
  • Contact Information:
  • - Email
  • - Twitter
  • - Bluesky
  • - LinkedIn
  • - Github
  • - Google Scholar