Hang Wang
Scholar

Google Scholar ID: Xdb3u_q3RKwC
Microsoft
Reinforcement Learning · AI Agent · World Model · Distributed Optimization · Domain Adaptation
Citations & Impact
All-time
Citations: 247
H-index: 6
i10-index: 3
Publications: 20
Co-authors: 11
Publications
20 items
Resume (English only)
Academic Achievements
  • Joining Microsoft as a Researcher in Redmond, WA in 2025
  • Book “Continual and Reinforcement Learning for Edge AI: Framework, Foundation, and Algorithm Design” to be published by Springer in June 2025
  • Paper “Heterogeneous Decision Making: When Uncertainty-aware Planning Meets Bounded Rationality” accepted by CPAL
  • Paper “AdaWM: Adaptive World Model based Planning for Autonomous Driving” accepted by ICLR 2025
  • Open-source platform CarDreamer now available
  • Paper “L-MBOP-E: Latent-Model Based Offline Planning with Extrinsic Policy Guided Exploration” accepted by IEEE International Conference on Mobility: Operations, Services, and Technologies (MOST)
  • Paper “Warm-Start Actor-Critic: From Approximation Error to Sub-optimality Gap” accepted by ICML 2023 (Oral, top 5%)
  • Paper “Adaptive Ensemble Q-learning: Minimizing Estimation Bias via Error Feedback” accepted by NeurIPS 2022
  • Paper “Distributed Q-Learning with State Tracking for Multi-agent Networked Control” accepted
Research Experience
  • Research Intern, Bosch Center of Artificial Intelligence, Sunnyvale, California, USA, Jun 2024 - Dec 2024
  • Research Graduate Intern, Intel, Chandler, Arizona, USA, May 2022 - Aug 2022
  • Research Engineer, Nanyang Technological University, Singapore, Sep 2018 - Sep 2019
  • Research Intern, Twente University, Netherlands, Jul 2017 - Oct 2017
  • Intern, SenseTime, Jul 2018 - Sep 2018
  • Research Associate, HI Lab, USTC, Jun 2016 - Nov 2017
Education
  • Ph.D. student in Electrical Engineering, UC Davis, 2019 - 2025 (Advisor: Prof. Junshan Zhang; Co-advisor: Prof. Yubei Chen)
  • B.Eng. (Talent Honor) in Automation, USTC, 2014 - 2018
Background
  • Research focuses on developing AI agents that learn and continually evolve through direct interaction with the physical world. Specific research areas include warm-start reinforcement learning and self-supervised learning, multi-agent reinforcement learning (distributed optimization), and foundation models (with particular emphasis on world models). His work spans both theoretical innovations and practical algorithmic implementations, with applications in autonomous driving, robotics, the Internet of Things (IoT), and edge computing.
Miscellany
  • Enjoys interdisciplinary research, such as applying RL to optical physics (with Prof. Munday), smart grids, and biomedical fields.