- MoTVLA: A Vision-Language-Action Model with Unified Fast-Slow Reasoning
- Compose by Focus: Scene Graph-based Atomic Skills
- ViSA-Flow: Accelerating Robot Skill Learning via Large-Scale Video Semantic Action Flow
- Large Language Models as Natural Selector for Embodied Soft Robot Design
- CRITERIA: A New Benchmarking Paradigm to Evaluate Trajectory Prediction Approaches
- Using Upsampling Conv-LSTM for Respiratory Sound Classification
- Learn TAROT with MENTOR: A Meta-Learned Self-Supervised Approach for Trajectory Prediction
Research Experience
1. Latent Space Clustering of Visual Latent Representations (ongoing), advised by Prof. Nima Fazeli and Prof. Heng Yang
2. Visiting Fellow, Computational Robotics Group, Harvard University, advised by Prof. Heng Yang
3. AI-Driven Robotic Structure Evaluation, HDR Lab, University of Michigan, advised by Prof. Xiaonan (Sean) Huang
Education
1. Master's Degree: Robotics, University of Michigan. Advisors: Prof. Nima Fazeli and Prof. Xiaonan (Sean) Huang
2. Bachelor's Degree: Electrical Engineering, University of Toronto. Advisor: Prof. Chan Carusone
Background
Research Interests: Embodied AI, data-efficient robot learning, and learning from humans, with an emphasis on multi-modal vision-language models and instruction-based semantic segmentation.

Professional Field: Robotics.

Bio: Changhe Chen is a master's student in Robotics at the University of Michigan. Before joining the University of Michigan, he received his BASc in Electrical Engineering from the University of Toronto, where he worked with Prof. Chan Carusone on reinforcement learning for analog circuit design.
Miscellany
Interned as a research assistant at Huawei's Noah's Ark Lab, where he developed multi-agent reinforcement learning platforms and trajectory prediction models for autonomous driving.