Yongjie Wang

Google Scholar ID: 9hYmQ1MAAAAJ
Nanyang Technological University
Explainable AI · Interpretability · Machine Learning · Trustworthy AI
Citations & Impact (All-time)
  • Citations: 197
  • H-index: 7
  • i10-index: 5
  • Publications: 18
  • Co-authors: 8
Resume (English only)
Academic Achievements
  • A Survey on Natural Language Counterfactual Generation, Findings of EMNLP 2024 (*equal contribution).
  • PairCFR: Enhancing Model Training on Paired Counterfactually Augmented Data through Contrastive Learning, ACL 2024 Main (*equal contribution).
  • Hybrid Multimodal Fusion for Graph Learning in Disease Prediction, Methods (Elsevier), 2024.
  • Gradient based Feature Attribution in Explainable AI: A Technical Review, arXiv preprint, 2024.
  • Explaining Language Models' Predictions with High-Impact Concepts, Findings of EACL 2024.
  • Flexible and Robust Counterfactual Explanations with Minimal Satisfiable Perturbations, CIKM 2023.
  • PhD Thesis: Counterfactual Explanations for Machine Learning Models on Heterogeneous Data, Nanyang Technological University, 2023.
  • Summarizing User-Item Matrix By Group Utility Maximization, ACM TKDD 2023 (extension of ICDM 2021).
  • DualCF: Efficient Model Extraction Attack from Counterfactual Explanations, FAccT 2022.
  • The Skyline of Counterfactual Explanations for Machine Learning Decision Models, CIKM 2021.
  • Summarizing User-Item Matrix By Group Utility Maximization, ICDM 2021.
Background
  • Currently a research staff member at the Joint NTU-WeBank Research Centre, Nanyang Technological University, supervised by Dr. Shen Zhiqi.
  • Research interests include integrating explanation techniques to enhance model trustworthiness and robustness, and applying modern models (e.g., LLMs) to high-stakes applications.
  • Exploring the philosophy of explainable AI from multiple disciplines such as causality, psychology, and social science.
  • Investigating effective retrieval-augmented generation to mitigate LLM hallucination.
  • Incorporating counterfactual explanations into learning paradigms.
  • Studying and probing modern Large Language Models (LLMs), e.g., how in-context learning enhances their capabilities.
  • Developing concept-level explanations to understand high-level representations in LLMs.