- Paper 'Port-Hamiltonian Neural ODEs on Lie Groups' received the Best Paper Award from the IEEE RAS Technical Committee on Robot Control.
- Paper 'Learning IMU Bias Model for Visual Inertial Odometry' accepted to RA-L.
- Paper 'Physics-Informed Multi-agent Reinforcement Learning' accepted to T-RO.
- Paper 'Port-Hamiltonian Neural ODEs on Lie Groups for Robot Dynamics Learning and Control' received an Honorable Mention of the 2024 T-RO King-Sun Fu Memorial Best Paper Award.
- Workshop 'Fast Motion Planning and Control in the Era of Parallelism' accepted to RSS'25.
- Paper 'Variational Integrator-Based Trajectory Optimization for Legged Robots' accepted to ICRA'25.
- Paper 'Model Learning and Predictive Control for Dynamic Maneuvers on Legged Robots' accepted to RA-L.
- Papers 'Optimal Planning with Large Language Model Guidance' and 'Learning Dynamics from Sensor Observations' accepted to ICRA'24.
- Paper 'Learning Graph Topology' accepted to MRS'23.
- Paper 'Lie Group Forced Variational Integrator Networks' accepted to L4DC'23.
- Paper 'Learning Distributed Multi-Robot Interactions' accepted to ICRA'23.
- Gave a talk on 'Learning and Control of Hamiltonian Dynamics on the SE(3) Manifold' at SIAM MDS'22.
- Paper 'Adaptive Control with Learned Disturbance Features' accepted to L-CSS 2022.
- Paper 'Sparse Bayesian Kernel-based Mapping' accepted to T-RO.
Research Experience
- Postdoctoral Research Associate at the Kavraki Lab, Rice University, since June 2024, working with Prof. Lydia Kavraki on task and motion planning.
- Previously worked as a software engineer at Microsoft.
Education
- Ph.D.: Department of Electrical and Computer Engineering, University of California, San Diego, advised by Prof. Nikolay Atanasov; dissertation: 'Learning Environment and Dynamics Representations for Autonomous Robot Navigation'.
- M.S.: Oregon State University.
- B.S.: Hanoi University of Science and Technology, Hanoi, Vietnam.
Background
Research Interests: robotics, machine learning, control theory, and optimization. My work focuses on a robot's understanding of its environment, e.g., probabilistic mapping, navigation, and exploration, and of its own dynamics, e.g., robot dynamics learning, model-based reinforcement learning, and learning from demonstration. I am also interested in modeling uncertainty in map representations and robot dynamics for safe and active planning and control.