Publications
Published papers in AI and speech venues including ACL, EMNLP, Interspeech, ICASSP, and IEEE SLT, and co-authored patents. Three papers on efficient and robust LLM pre-training (in collaboration with UCSB) were accepted at EMNLP 2025.
Research Experience
Worked on efficient speech-processing models for Alexa devices at Amazon, where research in neural efficiency reduced model size and latency while improving accuracy in production systems.
Education
Ph.D. in Computer Science and Cognitive Science from Indiana University, where his research focused on neural waveform coding inspired by human learning.
Background
Senior Applied Scientist at Amazon AGI, working on large-language-model (LLM) training that blends speech and audio toward more natural, interactive intelligence. Led research in neural efficiency, developing sub-8-bit quantization-aware training and sparsification methods.
Miscellany
Enjoys indoor and outdoor sports and spending time in nature, which he finds key to approaching the meaning of life. Likes singing with or without an audience, whether with a live band or in the shower.