Publications
Mobile Robot Navigation Using Hand-Drawn Maps: A Vision Language Model Approach, IEEE Robotics and Automation Letters, 2025 + ICRA 2026
X-Nav: Learning End-to-End Cross-Embodiment Navigation for Mobile Robots, Under Review at RAL, 2025
4CNet: A Diffusion Approach to Map Prediction for Decentralized Multi-Robot Exploration, Under Revision at T-RO, 2025
MLLM-Search: A Zero-Shot Approach to Finding People using Multimodal Large Language Models, Under Review at RAL, 2024
OLiVia-Nav: An Online Lifelong Vision Language Approach for Mobile Robot Social Navigation, CoRL Workshop: Lifelong Learning for Home Robots (Spotlight Presentation), 2024 + ICRA 2025
Find Everything: A General Vision Language Model Approach to Multi-Object Search, CoRL Workshop: Language and Robot Learning, 2024
NavFormer: A Transformer Architecture for Robot Target-Driven Navigation in Unknown and Dynamic Environments, IEEE Robotics and Automation Letters, 2024 + ICRA 2025
Research Experience
Currently at Syncere; previously a postdoctoral researcher at Stanford University.
Education
PhD from the University of Toronto; postdoctoral researcher at Stanford University.
Background
Currently building robotic lamps at Syncere. Research interests include developing robots for human-centric environments that enable natural human interaction, advanced reasoning, and knowledge sharing among robots.