Publications
1. Lyra: Generative 3D Scene Reconstruction via Video Diffusion Model Self-Distillation (arXiv 2025)
2. AC3D: Analyzing and Improving 3D Camera Control in Video Diffusion Transformers (CVPR 2025)
3. VD3D: Taming Large Video Diffusion Transformers for 3D Camera Control (ICLR 2025)
4. SG-I2V: Self-Guided Trajectory Control in Image-to-Video Generation (ICLR 2025)
5. TC4D: Trajectory-Conditioned Text-to-4D Generation (ECCV 2024)
6. 4D-fy: Text-to-4D Generation Using Hybrid Score Distillation Sampling (CVPR 2024)
7. CC3D: Layout-Conditioned Generation of Compositional 3D Scenes (ICCV 2023)
8. 3D-Aware Video Generation (TMLR 2023)
9. Semantic Self-adaptation: Enhancing Generalization with a Single Sample (TMLR 2023)
10. Towards Robust and Adaptive Motion Forecasting: A Causal Representation Perspective (CVPR 2022)
Research Experience
Position: Research Intern; Company: Snap Inc.; Project: Worked in the Creative Vision Group led by Sergey Tulyakov.
Position: Research Intern; Company: NVIDIA Spatial Intelligence Lab; Lead: Sanja Fidler; Project: Video and 3D generative models.
Education
Degree: PhD; Institution: University of Toronto; Advisors: David Lindell, Andrea Tagliasacchi; Time: Not provided; Field: Computer Science.
Degree: Bachelor's; Institution: TU Darmstadt; Advisor: Stefan Roth; Time: Not provided; Field: Computational Engineering.
Background
Research Interests: Controllable video, 3D, and 4D generation. Background: PhD student in Computer Science at the University of Toronto, supervised by David Lindell and Andrea Tagliasacchi. Research intern at NVIDIA Spatial Intelligence Lab, working on video and 3D generative models under Sanja Fidler.
Miscellany
Career Objective: Seeking a full-time position in industry or at a startup focused on generative models.