Ying Shen

Google Scholar ID: NytpXgwAAAAJ
Ph.D. Student of Computer Science, University of Illinois Urbana-Champaign
Multimodal Machine Learning · Natural Language Processing · Computer Vision · Generative Models
Citations & Impact (All-time)
  • Citations: 2,102
  • H-index: 10
  • i10-index: 10
  • Publications: 20
  • Co-authors: 0
Academic Achievements
  • LaTtE-Flow: Layerwise Timestep-Expert Flow-based Transformer
  • LLM Braces: Straightening Out LLM Predictions with Relevant Sub-Updates
  • Kaleido Diffusion: Improving Conditional Diffusion Models with Autoregressive Latent Modeling
  • Many-to-many Image Generation with Auto-regressive Diffusion Models
  • InternalInspector I2: Robust Confidence Estimation in LLMs through Internal States
  • Recipient of the Amazon-VT Fellowship for the 2023–2024 academic year
Research Experience
  • Machine Learning Research Intern, Apple, May 2025 – Sep 2025, Cupertino, CA
  • Machine Learning Research Intern, Apple, May 2023 – Aug 2023, New York, NY
  • Research Associate, Language Technologies Institute, Carnegie Mellon University, Jan 2019 – Dec 2019, Pittsburgh, PA
  • Graduate Research Assistant, MultiComp Laboratory, Carnegie Mellon University, advised by Prof. Louis-Philippe Morency and Prof. Graham Neubig
Education
  • Ph.D. in Computer Science, present, University of Illinois Urbana-Champaign
  • Ph.D. in Computer Science, 2024, Virginia Tech, advised by Prof. Lifu Huang and Prof. Ismini Lourentzou
  • MSc in Intelligent Information Systems, 2018, Carnegie Mellon University
  • BEng in Software Engineering, 2017, School of Software Engineering, Fudan University
Background
  • Research Interests: Multimodal Interaction, Deep Learning, Multimodal Machine Learning, Deep Generative Models, Natural Language Processing, Computer Vision
  • Summary: Focused on developing efficient, controllable, adaptive, and interactive multimodal generative models to build robust AI agents capable of understanding, interpreting, and reasoning about the physical world.