Two first-author papers accepted to NeurIPS 2025: 'Reinforcing Diffusion Models by Direct Group Preference Optimization' and 'Rewards Are Enough for Fast Photo-Realistic Text-to-image Generation'
Two first-author papers accepted to ICCV 2025: 'Learning Few-Step Diffusion Models by Trajectory Distribution Matching' (TDM) and 'Adding Additional Control to One-Step Diffusion with Joint Distribution Matching' (JDM)
Two first/co-first author papers accepted to ICLR 2025: 'You Only Sample Once' and 'Decoupled Graph Energy-based Model for Node Out-of-Distribution Detection on Heterophilic Graphs'
Proposed DGPO: achieves 20x faster training than the prior SOTA while delivering superior performance
Introduced NCT and R0: the first algorithms to enable control addition and RLHF post-training for one-step generators without diffusion distillation or training images
TDM distilled PixArt into a 4-step generator using only 2 A800 GPU hours, outperforming the teacher on real user preference
JDM enables adding teacher-unknown controls to one-step students and supports human feedback learning (HFL)
Background
Final-year PhD student at the Hong Kong University of Science and Technology, supervised by Prof. Jing Tang
Research interests: Efficient Generative Models, Few-Step Text-to-Image/Video Diffusion Models, and Graph Neural Networks
Focused on high-quality and real-time generation
Reviewer for ICML, ICLR, NeurIPS, CVPR, ICCV, and other venues
Expected to graduate in December 2025; actively seeking industrial research roles in China and Asia