Zizhao Wang


Google Scholar ID: V4KQIWsAAAAJ
Affiliation: UT Austin
Research area: reinforcement learning
Citations & Impact (All-time)
  • Citations: 530
  • H-index: 11
  • i10-index: 14
  • Publications: 20
  • Co-authors: 6
Academic Achievements
  • 1. Adversarial Reinforcement Learning for Large Language Model Agent Safety
  • 2. Dyn-O: Building Structured World Models with Object-Centric Representations
  • 3. SkiLD: Unsupervised Skill Discovery Guided by Factor Interactions
  • 4. CaMP: Causal Motion Predictor for Robust Trajectory Forecasting
  • 5. Disentangled Unsupervised Skill Discovery for Efficient Hierarchical Reinforcement Learning
  • 6. Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning
  • 7. ELDEN: Exploration via Local Dependencies
  • 8. Learning to Correct Mistakes: Backjumping in Long-Horizon Task and Motion Planning
  • 9. Causal Dynamics Learning for Task-Independent State Abstraction
Research Experience
  • Spent time as a research intern at Google, Microsoft, and Honda Research Institute.
Education
  • 1. PhD student in ECE at the University of Texas at Austin, advised by Prof. Peter Stone
  • 2. M.S. in CS at Columbia University, advised by Prof. Peter Allen and Prof. Itsik Pe’er
  • 3. Undergraduate studies at the University of Michigan - Ann Arbor
Background
  • Research interests include reinforcement learning, LLM agents, world models, and causal reasoning.