- September 2025: A paper on a new self-supervised method for multi-channel imaging via enhanced cross-channel learning was accepted at NeurIPS 2025.
- September 2024: A paper proposing a robust ViT to handle multi-channel imaging data with missing channels at test time was accepted at NeurIPS 2024.
- January 2024: A paper on LLM communication via raw transformer output embeddings was accepted at ICLR 2024.
- Published 'ChA-MAEViT: Unifying Channel-Aware Masked Autoencoders and Multi-Channel Vision Transformers for Improved Cross-Channel Learning', which presents four key strategies to enhance feature learning across channels in multi-channel imaging (MCI).
Research Experience
- Research Data Scientist at Zalo R&D.
- Research Assistant at Texas Tech University.
- Research Scientist Intern at ByteDance (Seed Foundation Code team), summer 2024, working on Video Diffusion Models.
- Applied Scientist Intern at Amazon (Geospatial team), summer 2025.
Education
- Ph.D. student at Boston University, advised by Prof. Bryan Plummer.
- Bachelor's degree in Computer Science (Honors Program) from HCMUT, Vietnam.
Background
Research interests include machine learning, particularly efficient deep learning, computer vision, the intersection of vision and language, and large language models.