Publications
Training Language Models to Reason Efficiently, NeurIPS, 2025
MASAI: Modular Architecture for Software-engineering AI Agents, NeurIPS Workshop, 2024
GAR-meets-RAG Paradigm for Zero-Shot Information Retrieval, arXiv
Have LLMs Advanced Enough? A Challenging Problem Solving Benchmark For Large Language Models, EMNLP, 2023
Learning and Leveraging Verifiers to Improve Planning Capabilities of Pre-trained Language Models, KLR Workshop at ICML, 2023
SymNet 3.0: Exploiting Long-Range Influences in Learning Generalized Neural Policies for Relational MDPs, UAI, 2023
SymNet 2.0: Effectively Handling Non-Fluents and Actions in Generalized Neural Policies for RDDL Relational MDPs, UAI, 2022
Research Experience
Worked as a Research Fellow at Microsoft Research India under Nagarajan Natarajan, focusing on retrieval and software agents.
Education
Second-year PhD student in the Machine Learning Department at Carnegie Mellon University, advised by Andrea Zanette. Previously completed a B.Tech and an M.Tech at IIT Delhi, working with Mausam and Parag Singla.
Background
Interested in making AI systems better at reasoning and in enabling continual learning in the era of modern deep learning.