1. Tandem Transformers for Inference Efficient LLMs (ICML 2024)
2. CDQuant: Greedy Coordinate Descent for Accurate LLM Quantization (arXiv 2024)
Research Experience
Pre-Doctoral Researcher at Google DeepMind, India, working with Dr. Praneeth Netrapalli, Dr. Arun Suggala, and Dr. Prateek Jain. His work focuses on making LLM inference faster through quantization, speculative decoding, and sparsification. He is also working on speeding up attention over million-token contexts through clustering and approximate logit computation.
Background
Interests include, but are not limited to: making LLM inference faster through next-generation architectures, quantization, sparsification, speculative decoding, KV-cache compression, and adaptive routing for elastic models; and speeding up LLM pretraining and finetuning through better adapters, novel loss functions, better second-order optimizers, and faster checkpointing.