Jayjun Lee
Scholar

Google Scholar ID: OhompmUAAAAJ
University of Michigan
Robot Learning · Robotic Manipulation
Citations & Impact
All-time
  • Citations: 52
  • H-index: 3
  • i10-index: 2
  • Publications: 7
  • Co-authors: 12
Academic Achievements
  • Papers:
    - AimBot: A Simple Auxiliary Visual Cue to Enhance Spatial Awareness of Visuomotor Policies, CoRL 2025
    - ViTaSCOPE: Visuo-tactile Implicit Representation for In-hand Pose and Extrinsic Contact Estimation, RSS 2025
    - RACER: Rich Language-guided Failure Recovery Policies for Imitation Learning, ICRA 2025
    - Do Vision-Language Models Represent Space and How? Evaluating Spatial Frame of Reference Under Ambiguities, ICLR 2025
    - Neural Inverse Source Problems, CoRL 2024
  • Awards:
    - RACER won the Best Overall Award at the UM AI Symposium 2024
  • Organizing:
    - Co-organizer, Human-to-Robot workshop at CoRL 2025
Research Experience
  • Currently a member of the Manipulation and Machine Intelligence (MMINT) Lab at the University of Michigan. Previously worked with Prof. Joyce Chai in the Situated Language and Embodied Dialogue (SLED) Lab.
Education
  • MS in Robotics (2nd year), University of Michigan – Ann Arbor; Advisor: Prof. Nima Fazeli.
  • Bachelor's in Electronic and Information Engineering, Imperial College London; Advisor: Prof. Ad Spiers, Manipulation and Touch Lab.
Background
  • Research interests: robot learning, robotic manipulation, and spatial intelligence.
  • Bio: Jayjun Lee is a PhD student in Robotics at the University of Michigan. His research focuses on developing algorithms for robots to perceive and interact with the physical world, with an emphasis on multi-modal perception (e.g., vision, tactile, force/torque, language, audio) and on learning representations for contact- and force-rich manipulation skills.
Miscellany
  • Contact: jayjun [at] umich [dot] edu. Links to GitHub, Google Scholar, Twitter, etc. are available on his personal website.