Andrea Zanette
Google Scholar ID: M7y1dj8AAAAJ
Assistant Professor, Carnegie Mellon University
Foundation Models · Artificial Intelligence · Machine Learning · Reinforcement Learning
Citations & Impact (all-time)
  • Citations: 1,713
  • H-index: 16
  • i10-index: 19
  • Publications: 20
  • Co-authors: 28
Resume
Academic Achievements
  • Publications:
    - Can Large Reasoning Models Self-Train? (NeurIPS 2025)
    - Training Language Models to Reason Efficiently (NeurIPS 2025)
    - Accelerating Unbiased LLM Evaluation via Synthetic Feedback (ICML 2025)
    - Fast Best-of-N Decoding via Speculative Rejection (NeurIPS 2024)
    - ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL (ICML 2024)
    - Is Offline Decision Making Possible with Only Few Samples? (venue not specified)
    - Policy Finetuning in Reinforcement Learning via Design of Experiments using Offline Data (NeurIPS 2023)
    - When is Realizability Sufficient for Off-Policy Reinforcement Learning? (ICML 2023)
    - Bellman Residual Orthogonalization for Offline Reinforcement Learning (venue not specified)
Research Experience
  • Assistant Professor at Carnegie Mellon University in the Electrical and Computer Engineering (ECE) department, with a courtesy appointment in the Machine Learning Department (MLD). Previously a postdoctoral scholar at UC Berkeley.
Education
  • PhD: Stanford University, supervised by Emma Brunskill and Mykel J. Kochenderfer; Postdoc: UC Berkeley, collaborated with Martin Wainwright, Peter Bartlett, and Sergey Levine.
Background
  • Broadly interested in Foundation Models, from theory to practice; topics include reasoning, alignment, efficiency, and optimization, among others.
Miscellany
  • Actively recruiting strong, motivated PhD students to join the group. Can supervise one postdoc for two years, starting in Fall 2026, under the Carnegie Bosch Institute fellowship. Remote interns and visitors are welcome.