Mitsuhiko Nakamoto
Google Scholar ID: wIDVzroAAAAJ
UC Berkeley
Deep Learning · Reinforcement Learning · Robot Learning
Citations & Impact (all-time)
  • Citations: 675
  • H-index: 10
  • i10-index: 10
  • Publications: 13
  • Co-authors: 3
Academic Achievements
  • 'Steering Your Diffusion Policy with Latent Space Reinforcement Learning' (CoRL 2025, oral presentation)
  • 'Steering Your Generalists: Improving Robotic Foundation Models via Value Guidance' (CoRL 2024)
  • 'SuSIE: Zero-Shot Robotic Manipulation with Pretrained Image-Editing Diffusion Models' (ICLR 2024)
  • 'Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning' (NeurIPS 2023)
Research Experience
  • Currently a third-year CS Ph.D. student in the BAIR Lab at UC Berkeley.
  • Research intern at TRI in Cambridge, MA during the summer of 2025.
  • Spent a summer in Professor Yutaka Matsuo's lab before moving to Berkeley.
Education
  • Ph.D.: University of California, Berkeley, Computer Science; advised by Professor Sergey Levine.
  • B.S.: University of Tokyo, graduated March 2022; advised by Professor Yoshimasa Tsuruoka.
Background
  • Research interests: data-driven approaches to solving real-world robotic tasks, including offline reinforcement learning (RL), online RL fine-tuning, and imitation learning. The goal is to develop algorithms that give robots both dexterity and broad generalization so they can be incorporated into everyday life.
Miscellany
  • Contact: Email, Google Scholar, GitHub, Twitter.