Fine-Tuning Visual Autoregressive Models for Subject-Driven Generation (ICCV, 2025)
AESOP: Auto-Encoded Supervision for Perceptual Image Super-Resolution (CVPR, 2025)
GSGAN: Adversarial Learning for Hierarchical Generation of 3D Gaussian Splats (NeurIPS, 2024) - Winner of Qualcomm Innovation Fellowship Korea 2024 (QIFK 2024)
Style Injection in Diffusion: A Training-free Approach for Adapting Large-scale Diffusion Models for Style Transfer (CVPR, 2024) - Highlight
Diversity-aware Channel Pruning for StyleGAN Compression (CVPR, 2024)
Task-disruptive Background Suppression for Few-Shot Segmentation (AAAI, 2024)
Correlation-Guided Query-Dependency Calibration in Video Representation Learning for Temporal Grounding (arXiv, 2023)
Frequency-Based Motion Representation for Video Generative Adversarial Networks (TIP, 2023)
Disentangled Representation Learning for Unsupervised Neural Quantization (CVPR, 2023)
Research Experience
My recent research focuses on 3D generative models, particularly those built on Generative Adversarial Networks and Gaussian splatting. I am also interested in various generation tasks leveraging large-scale diffusion models.
Education
Ph.D. candidate in the Visual Computing Lab (VCLab) at Sungkyunkwan University, advised by Prof. Jae-Pil Heo. I received my Master's and Bachelor's degrees from Sungkyunkwan University.
Background
My research interests span a range of machine learning and computer vision tasks, with a particular focus on generative models and video understanding.