Nima Fazeli
Google Scholar ID: fE3FSqIAAAAJ
Asst. Prof. of Robotics, CS, ME -- University of Michigan
Robotic Manipulation · Controls · Machine Learning · Contact · Artificial Intelligence
Citations & Impact
All-time
Citations: 1,913
H-index: 17
i10-index: 29
Publications: 20
Co-authors: 46
Academic Achievements
  • Recipient of the NSF CAREER Award and the Rohsenow Fellowship, with research support from the National Robotics Initiative and the NSF Advanced Manufacturing program. Research has been featured in major media outlets including The New York Times, CBS, CNN, and BBC.
Research Experience
  • Assistant Professor of Robotics, Computer Science (EECS), and Mechanical Engineering at the University of Michigan, and an Amazon Scholar with Amazon Robotics. Leads the Manipulation and Machine Intelligence (MMint) Lab, which pursues intelligent and dexterous robotic manipulation through advances in sensing, learning, and control. The research program centers on fundamental enabling technologies for a diverse range of applications, including automation, manufacturing, logistics, in-home/assistive robotics, surgical systems, and space robotics.
Education
  • PhD, 2019 — Massachusetts Institute of Technology (Advisor: Prof. Alberto Rodriguez); MSc, 2014 — University of Maryland, College Park; BSc, 2011 — Amirkabir University of Technology.
Background
  • Research interests include robotic manipulation and embodied intelligence, with a focus on modeling, representation learning, perception, and planning through contact. The long-term objective is to develop robotic systems that interact with the physical world autonomously, safely, and gracefully. Current interests center on multi-modal (e.g., visuo-tactile) representation learning, model-based reasoning, and planning for robotic systems in uncertain environments.
Miscellany
  • Particularly excited about tactile sensing: how robots should interpret touch, and how they should build models of the world through vision and touch.