- Subtask-Aware Visual Reward Learning from Segmented Demonstrations, ICLR 2025
- EXTRACT: Efficient Policy Learning by Extracting Transferable Robot Skills from Offline Data, CoRL 2024
- DROID: A Large-Scale In-the-Wild Robot Manipulation Dataset, RSS 2024
- Open X-Embodiment: Robotic Learning Datasets and RT-X Models, ICRA 2024 (Best Conference Paper Award)
- FurnitureBench: Reproducible Real-World Benchmark for Long-Horizon Complex Manipulation, RSS 2023 (Best System Paper Award)
Academic activities: Conference reviewer for ICLR (2026, 2025), ICRA (2025), CoRL (2024), and RSS (2024); co-organized the Automating Robotic Surgery Workshop at CoRL 2025 and moderated its panel discussion.
Research Experience
Worked as a software engineering consultant at Epic Games and as a software engineering intern at BinaryVR before joining KAIST.
Education
A fourth-year M.S./Ph.D. student in the Cognitive Learning for Vision and Robotics Lab (CLVR) at KAIST, advised by Prof. Joseph J. Lim; received a B.S. in Computer Science from Kookmin University.
Background
His research goal is to endow physical robots with the ability to carry out dexterous, long-horizon tasks. His vision begins with robots that can assist in our everyday lives and eventually scales up to more complex endeavors, like building houses or even entire cities! This goal is rooted in his curiosity about understanding physical intelligence. To achieve it, he is developing agents that can plan over long horizons using a physical understanding of the world, while also reactively adjusting their control, much like many animals do.