Erfan Shayegani
Google Scholar ID: G9pIW1AAAAAJ
Ph.D. student at University of California, Riverside
Natural Language Processing · Alignment · AI Safety · Vision and Language · Machine Learning
Citations & Impact
All-time
Citations: 621
H-index: 7
i10-index: 7
Publications: 17
Co-authors: 13
Academic Achievements
  • Paper 'Just Do It!? Computer-Use Agents Exhibit Blind Goal-Directedness' released and featured in HuggingFace Top Daily Papers
  • 'Layer-wise Alignment: Examining Safety Alignment Across Image Encoder Layers in Vision Language Models' accepted as Spotlight (top 2.6% of 12,107 submissions) at ICML 2025
  • Co-first-authored paper 'Textual Unlearning' addressing cross-modality safety alignment accepted at EMNLP 2024 Findings
  • Awarded 'Outstanding Teaching Award' by UCR CS Department in June 2024
  • Work cited in the 'International Scientific Report on the Safety of Advanced AI'
  • Delivered a 3-hour tutorial on 'AI Safety and Adversarial Attacks' at ACL 2024
  • Served as reviewer for ICLR 2025 and ICLR 2026
  • Reviewed for NextGenAISafety 2024 workshop at ICML 2024
Background
  • 4th-year PhD student in Computer Science at UC Riverside
  • Research focuses on the intersection of Generative AI and trustworthiness, especially Large Language Models and Multimodal LLMs (LLMs/MLLMs) and Computer-Use Agents (CUAs)
  • Emphasizes Alignment, Robustness, Safety, Ethics, Fairness, Bias, and Security/Privacy
  • Deeply interested in Multimodal Understanding, Reasoning, Retrieval, Expert Specialization, Personalization, and Multilingual MLLMs
  • Explores novel Evaluation methods, Reward Modeling, and Post-Training Algorithms (e.g., Machine Unlearning, RL-based approaches) for adaptive, steerable, contextually aligned AI agents
  • Works on integrating AR/VR and Mixed Reality (MR) with AI Agents
  • Enjoys probing models from an adversarial perspective to expose alignment gaps as a fast path toward safer, more robust systems