- 'Memory-Efficient Visual Autoregressive Modeling with Scale-Aware KV Cache' accepted to NeurIPS 2025.
- 'TinyFusion: Diffusion Transformers Learned Shallow' selected as a CVPR 2025 Highlight Paper (top 3%).
Awards
- Second Place in the 2025 SkiTB Visual Tracking Challenge.
- Best Demonstration Runner-Up Award at ACM/IEEE IPSN 2024.
Research Experience
- Research intern at Princeton University, supervised by Prof. Zhuang Liu.
- Close collaboration with Prof. Jenq-Neng Hwang (University of Washington) and Prof. Xinchao Wang (NUS).
Education
- Undergraduate student at the National University of Singapore (NUS).
Background
Research Interests: Efficient Deep Learning, with a focus on optimizing the training and inference of LLMs, diffusion models, and multimodal models through sparse attention, network pruning, and efficient architectures. The long-term goal is to make deep learning affordable and accessible to everyone, everywhere.