Published multiple papers, including 'TOOLVERIFIER: Generalization to New Tools via Self-Verification' and 'Smaller Language Models are capable of selecting Instruction-Tuning Training Data for Larger Language Models'.
Research Experience
Currently interning at FAIR London, Meta AI, working with Dr. Jane Yu and Dr. Jason Weston on improving the tool-use capabilities of large language models. Previously interned at Microsoft Semantic Machines (2022) and Amazon Science (2021).
Education
Ph.D. in Computer Science, expected 2025, University of California, San Diego; MS in Computer Science, 2021, University of California, San Diego; B.Tech. in Computer Science, 2017, Indian Institute of Technology, Kanpur.
Background
Ph.D. candidate in Computer Science whose research interests include data understanding and data-driven approaches to improving NLP pipelines, with a particular focus on reducing annotation and training costs. He is also enthusiastic about designing goal-driven language assistants.
Miscellany
Enjoys playing the ukulele, playing football (soccer), and writing occasionally. Check out his blog!