Behzad Khamidehi

Google Scholar ID: ubjLs70AAAAJ
University of Toronto
Deep Learning · Optimization · Reinforcement Learning
Citations & Impact
All-time
Citations: 263
H-index: 9
i10-index: 9
Publications: 20
Co-authors: 9
Academic Achievements
  • 1. [Jul. 2022] Our paper 'Dynamic Resource Management for Providing QoS in Drone Delivery Systems' has been accepted to the IEEE International Conference on Intelligent Transportation Systems (ITSC 2022).
  • 2. [Jan. 2022] Our paper 'Reinforcement Learning-aided Safe Planning for Aerial Robots to Collect Data in Dynamic Environments' has been accepted to the IEEE Internet of Things Journal.
  • 3. [Jun. 2021] Our paper 'Distributed Deep Reinforcement Learning for Intelligent Traffic Monitoring with a Network of Aerial Robots' has been accepted to the IEEE International Conference on Intelligent Transportation Systems (ITSC 2021).
  • 4. [Dec. 2020] Our paper 'Trajectory Design for the Aerial Base Stations to Improve Cellular Network Performance' has been accepted to IEEE Transactions on Vehicular Technology.
  • 5. [Aug. 2020] Our paper 'Federated Learning for Cellular-connected UAVs: Radio Mapping and Path Planning' has been accepted to IEEE GLOBECOM 2020.
  • 6. [Jul. 2020] Our paper 'No Cross-Validation Required: An Analytical Framework for Regularized Mixed-Integer Problems' has been accepted to IEEE Communications Letters.
  • 7. [Jan. 2020] Our paper 'A Double Q-Learning Approach for Navigation of Aerial Vehicles with Connectivity Constraint' has been accepted to IEEE ICC 2020.
Research Experience
  • 1. Path Planning and Fleet Management for UAVs
  • 2. Reinforcement Learning-aided Safe Planning for Aerial Robots to Collect Data in Dynamic Environments
  • 3. Dynamic Resource Management for Providing QoS in Drone Delivery Systems
  • 4. Distributed Deep Reinforcement Learning for Intelligent Traffic Monitoring with a Network of Aerial Robots
Background
  • Behzad Khamidehi is a research associate in the Electrical and Computer Engineering Department at the University of Toronto, Canada. His research interests include fleet management and multi-agent path planning for autonomous robots, sequential decision-making, reinforcement learning, and optimization theory.