He has published several papers, including 'Scale-wise Distillation of Diffusion Models', 'Switti: Designing Scale-Wise Transformers for Text-to-Image Synthesis', 'Invertible Consistency Distillation for Text-Guided Image Editing in Around 7 Steps', and others. He organizes seminars, posts paper reviews, and teaches the Visual GenAI course at the Yandex School of Data Analysis.
Research Experience
He was a research intern at Meta AI in 2022, working on large-scale vector search and retrieval-augmented LLMs, advised by Matthijs Douze and Zeki Yalniz. In 2019, he interned at ETH Zurich, working with Vincent Fortuin and Stephan Mandt on generative modeling for missing data imputation.
Education
He received his Ph.D. in Computer Science from HSE University in December 2024, advised by Artem Babenko.
Background
He is a Lead Research Scientist at Yandex Research. His work broadly focuses on diffusion-based models and on finding effective synergies between different generative paradigms. He is dedicated to making large-scale visual generative models more efficient and practical, and is highly interested in extending generative models to downstream tasks, such as using them as strong feature extractors or data engines.