Publications
- Policy-labeled Preference Learning: Is Preference Enough for RLHF? (ICML 2025)
- Bellman Unbiasedness: Toward Provably Efficient Distributional Reinforcement Learning with General Value Function Approximation (ICML 2025)
- Spectral-Risk Safe Reinforcement Learning with Convergence Guarantees (NeurIPS 2024)
- D2NAS: Efficient Neural Architecture Search with Performance Improvement and Model Size Reduction for Diverse Tasks (IEEE Access)
- Pitfall of Optimism: Distributional Reinforcement Learning by Randomizing Risk Criterion (NeurIPS 2023)
- SPQR: Controlling Q-ensemble Independence with Spiked Random Model for Reinforcement Learning (NeurIPS 2023)
- On the Convergence of Continual Learning with Adaptive Methods (UAI 2023)
- Adaptive Methods for Nonconvex Continual Learning (NeurIPS 2022 Optimization for Machine Learning Workshop)
- Perturbed Quantile Regression for Distributional Reinforcement Learning (NeurIPS 2022)
Research Experience
Position: PhD Student; Work Experience: Conducting research at the Communications and Machine Learning Laboratory (CML), Seoul National University.
Education
Degree: PhD; University: Seoul National University; Major: Electrical and Computer Engineering; Advisor: Jungwoo Lee. Bachelor's Degree: Electrical Engineering; University: Seoul National University.
Background
Research Interests: Reinforcement learning, robot learning, optimization, and representation learning. Brief Introduction: Seungyub Han is a PhD student at the Communications and Machine Learning Laboratory in the Department of Electrical and Computer Engineering at Seoul National University, supervised by Jungwoo Lee.
Miscellany
Contact: seungyubhan@snu.ac.kr; GitHub, Google Scholar, LinkedIn, CV