News
- 2025.09.15 Paper "Option-aware Temporally Abstracted Value for Offline Goal-Conditioned Reinforcement Learning" accepted at NeurIPS 2025 as a Spotlight paper
- 2025.01.23 Paper "Prevalence of Negative Transfer in Continual Reinforcement Learning: Analyses and a Simple Baseline" accepted at ICLR 2025
- 2024.05.02 Paper "Listwise Reward Estimation for Offline Preference-based Reinforcement Learning" accepted at ICML 2024
- 2022.09.15 Paper "Descent Steps of a Relation-Aware Energy Produce Heterogeneous Graph Neural Networks" accepted at NeurIPS 2022 (joint work with Amazon Web Services)
- 2021.07.23 Paper "SS-IL: Separated Softmax for Incremental Learning" accepted at ICCV 2021
- 2020.09.26 Paper "Continual Learning with Node-Importance based Adaptive Group Sparse Regularization" accepted at NeurIPS 2020
- 2020.05.15 Paper "Iterative Channel Estimation for Discrete Denoising under Channel Uncertainty" accepted at UAI 2020
- 2019.09.03 Paper "Uncertainty-based Continual Learning with Adaptive Regularization" accepted at NeurIPS 2019
Research Experience
- 2025.09.15 Started internship at Trillion Labs
- 2021.11.15 Started internship at Amazon Web Services
Education
- Ph.D. [2022.03~Present], Department of Electrical and Computer Engineering, Seoul National University (SNU), Advisor: Taesup Moon
- M.S. [2019.09~2022.02], Department of Artificial Intelligence, Sungkyunkwan University (SKKU), Advisor: Taesup Moon
- B.S. [2015.03~2019.08], Department of Electrical & Computer Engineering, Sungkyunkwan University (SKKU)
Background
Research Interests: Continual Learning, Discrete Signal Denoising, Graph Neural Networks, Reinforcement Learning.