Meng Song
Google Scholar ID: UxuTHlcAAAAJ
PhD Student of Computer Science, University of California, San Diego
Reinforcement Learning · Self-supervised Learning · Robot Learning
Citations & Impact (all-time)
  • Citations: 787
  • h-index: 4
  • i10-index: 4
  • Publications: 15
  • Co-authors: 28
Academic Achievements
  1. 'Good Actions Succeed, Bad Actions Generalize: A Case Study on Why RL Generalizes Better' - Out-of-Distribution Generalization in Robotics Workshop at RSS, 2025.
  2. 'Towards Unsupervised Goal Discovery: Learning Plannable Representations with Probabilistic World Modeling' - PhD Thesis, 2024.
  3. 'Probabilistic World Modeling with Asymmetric Distance Measure' - Geometry-grounded Representation Learning and Generative Modeling Workshop at ICML, 2024.
  4. 'A Minimalist Prompt for Zero-Shot Policy Learning' - Task Specification Workshop at RSS, 2024.
  5. 'RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning' - EMNLP, 2022.
  6. 'Learning to Rearrange with Physics-Inspired Risk Awareness' - Risk Aware Decision Making Workshop at RSS, 2022.
  7. 'OpenRooms: An End-to-End Open Framework for Photorealistic Indoor Scene Datasets' - CVPR, 2022.
Research Experience
  • Published papers at multiple international conferences and contributed to a range of research projects, including probabilistic world models trained with contrastive learning and a novel prompting method for zero-shot policy learning.
Education
  • PhD: UC San Diego, advised by Prof. Manmohan Chandraker.
  • Master's: Robotics Institute, Carnegie Mellon University, working with Prof. Abhinav Gupta and Dr. Daniel Huber.
Background
  • Research Interests: developing a mathematical account of the intelligent agent from first principles. Recent work focuses on the question 'What is a good representation of states and goals in decision-making problems?', explored under three learning paradigms: reinforcement learning, imitation learning, and unsupervised learning.
Miscellany
  • Contact information includes email, a CV link, and online profiles on Google Scholar, Twitter, GitHub, and LinkedIn.