Scholar
Siliang Zeng
Google Scholar ID: IfqsDyYAAAAJ
University of Minnesota
Alignment
Agents
Reinforcement Learning
Foundation Models
Homepage
Google Scholar
Citations & Impact
All-time
Citations
444
H-index
11
i10-index
11
Publications
20
Co-authors
2
Contact
No contact links provided.
Publications
5 items
Aligning Frozen LLMs by Reinforcement Learning: An Iterative Reweight-then-Optimize Approach
2025
Cited
0
Reinforcing Multi-Turn Reasoning in LLM Agents via Turn-Level Credit Assignment
2025
Cited
0
Understanding Inverse Reinforcement Learning under Overparameterization: Non-Asymptotic Analysis and Global Optimality
2025
Cited
0
From Demonstrations to Rewards: Alignment Without Explicit Human Preferences
2025
Cited
0
Bridging the Training-Inference Gap in LLMs by Leveraging Self-Generated Tokens
arXiv.org · 2024
Cited
0
Resume (English only)
Co-authors
2 total
Mingyi Hong
Associate Professor, University of Minnesota; Amazon AGI
Alfredo Garcia
Texas A&M University