Pioneered instruction tuning, authoring the first published paper on the topic.
Key contributions include: Super-NaturalInstructions (scaling), Biomedical Instruction-tuning (applications), Instruction-following Evals (evaluation), and instruction-bias (bias analysis, Outstanding Paper Award at EACL 2023).
Developed reasoning methods such as reframing, question decomposition, math via programs, "Help Me Think", and "Take a Step Back"; created datasets including NumGLUE, FeasibilityQA, and TarGEN.
Proposed Self-Instruct (training) and Self-Discover (evaluation) for model self-improvement, and analyzed limitations of self-correction.
Awards: GDM Impact Award 2025 (for IMO 2024 Silver Medal contribution), Odia Bhasa Samman 2025, ASU Dean’s Dissertation Award 2023, CEN-CS Outstanding Student Award 2023.
The paper "Natural Instructions" received the AI2 "Lasting Impact Paper Award" 3.5 years after publication.
Co-organized the MATH-AI workshop at NeurIPS 2022 and NeurIPS 2024.
Holds multiple patents on LLM reasoning, scaling, post-training, self-improvement, and inference.
Miscellany
Featured in media coverage including The New York Times (IMO Silver Medal), Kanak News interview, and profile "The Making of a Scientist: From Classroom Rebel to AI Innovator".
Participated in podcasts on research initiation, instruction tuning, prompt engineering, and self-improving LLMs, including the NIST Podcast Series.
Engaged in panel discussions at NeurIPS and ACL with leading AI researchers; delivered invited talks at Allen AI, IIT Kanpur, University of Toronto, Google Research, and Facebook AI Research.
Interviewed Jeff Dean at the Odisha AI Conference 2024.
Personal journey highlighted in the "Ones Changing The World - 1CW" podcast.